You are on page 1of 2069

Tell us about your PDF experience.

Azure SQL documentation


Find documentation about the Azure SQL family of SQL Server database engine
products in the cloud: Azure SQL Database, Azure SQL Managed Instance, and SQL
Server on Azure VM.

Azure SQL Database

h WHAT'S NEW

What's new?

e OVERVIEW

What is SQL Database?

What are elastic pools?

f QUICKSTART

Create SQL Database

Configure firewall

q VIDEO

Azure SQL Database overview

p CONCEPT

Migrate from SQL Server

Advanced security

Business continuity

Monitoring and tuning

T-SQL differences with SQL Server

Azure SQL Managed Instance

h WHAT'S NEW
What's new?

e OVERVIEW

What is SQL Managed Instance?

What are instance pools?

f QUICKSTART

Create SQL Managed Instance

Configure VM to connect

Restore sample database

q VIDEO

Azure SQL Managed Instance overview

p CONCEPT

Migrate from SQL Server

Advanced security

Business continuity

Monitoring and tuning

T-SQL differences with SQL Server

SQL Server on Azure VM

e OVERVIEW

What's new?

What is SQL Server on Windows VM?

What is SQL Server on Linux VM?

f QUICKSTART

Create SQL on Azure VM (Windows)

Create SQL on Azure VM (Linux)


q VIDEO

SQL Server on Azure VM overview

p CONCEPT

Security considerations

High availability & disaster recovery

Performance guidelines

Learn Azure SQL

d TRAINING

Azure SQL for beginners

Azure SQL fundamentals

Azure SQL hands-on labs

Azure SQL bootcamp

Educational SQL resources

Migrate from SQL Server

e OVERVIEW

Migrating SQL Server Workloads FAQ

` DEPLOY

Azure SQL Database

Azure SQL Managed Instance

SQL Server on Azure VMs

Reference

` DEPLOY
Azure CLI samples

PowerShell samples

ARM template samples

a DOWNLOAD

SQL Server Management Studio (SSMS)

Azure Data Studio

SQL Server Data Tools

Visual Studio 2019

i REFERENCE

Migration guide

Transact-SQL (T-SQL)

Azure CLI

PowerShell

REST API

Connect and query

f QUICKSTART

Overview

SQL Server Management Studio (SSMS)

Azure Data Studio

Azure portal

Visual Studio (.NET)

Visual Studio Code

.NET Core

Python

With Azure AD and SqlClient


Development

e OVERVIEW

Application development

Connect apps to Azure SQL

Disaster recovery app design

Managing rolling upgrades (SQL DB)

Development strategies (SQL VM)

SaaS database tenancy patterns

c HOW-TO GUIDE

Design first database (SSMS)

Design first database (C#)


What is Azure SQL?
Article • 04/24/2023

Applies to:
Azure SQL Database
Azure SQL Managed Instance
SQL Server
on Azure VM

Azure SQL is a family of managed, secure, and intelligent products that use the SQL
Server database engine in the Azure cloud.

Azure SQL Database: Support modern cloud applications on an intelligent,


managed database service that includes serverless compute.
Azure SQL Managed Instance: Modernize your existing SQL Server applications at
scale with an intelligent fully managed instance as a service, with almost 100%
feature parity with the SQL Server database engine. Best for most migrations to the
cloud.
SQL Server on Azure VMs: Lift-and-shift your SQL Server workloads with ease and
maintain 100% SQL Server compatibility and operating system-level access.

Azure SQL is built upon the familiar SQL Server engine, so you can migrate applications
with ease and continue to use the tools, languages, and resources you're familiar with.
Your skills and experience transfer to the cloud, so you can do even more with what you
already have.

Learn how each product fits into Microsoft's Azure SQL data platform to match the right
option for your business requirements. Whether you prioritize cost savings or minimal
administration, this article can help you decide which approach delivers against the
business requirements you care about most.

If you're new to Azure SQL, check out the What is Azure SQL video from our in-depth
Azure SQL video series:

https://learn.microsoft.com/shows/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-
61/player

Overview
In today's data-driven world, driving digital transformation increasingly depends on our
ability to manage massive amounts of data and harness its potential. But today's data
estates are increasingly complex, with data hosted on-premises, in the cloud, or at the
edge of the network. Developers who are building intelligent and immersive
applications can find themselves constrained by limitations that can ultimately impact
their experience. Limitations arising from incompatible platforms, inadequate data
security, insufficient resources and price-performance barriers create complexity that
can inhibit app modernization and development.

One of the first things to understand in any discussion of Azure versus on-premises SQL
Server databases is that you can use it all. Microsoft's data platform leverages SQL
Server technology and makes it available across physical on-premises machines, private
cloud environments, third-party hosted private cloud environments, and the public
cloud.

Fully managed and always up to date


Spend more time innovating and less time patching, updating, and backing up your
databases. Azure is the only cloud with evergreen SQL that automatically applies the
latest updates and patches so that your databases are always up to date—eliminating
end-of-support hassle. Even complex tasks like performance tuning, high availability,
disaster recovery, and backups are automated, freeing you to focus on applications.

Protect your data with built-in intelligent security


Azure constantly monitors your data for threats. With Azure SQL, you can:

Remediate potential threats in real time with intelligent advanced threat detection
and proactive vulnerability assessment alerts.
Get industry-leading, multi-layered protection with built-in security controls
including T-SQL, authentication, networking, and key management.
Take advantage of the most comprehensive compliance coverage of any cloud
database service.

Business motivations
There are several factors that can influence your decision to choose between the
different data offerings:

Cost: Both platform as a service (PaaS) and infrastructure as a service (IaaS) options
include base price that covers underlying infrastructure and licensing. However,
with the IaaS option you need to invest additional time and resources to manage
your database, while in PaaS you get administration features included in the price.
IaaS enables you to shut down resources while you aren't using them to decrease
the cost, while PaaS is always running unless you drop and re-create your
resources when they're needed.
Administration: PaaS options reduce the amount of time that you need to invest to
administer the database. However, it also limits the range of custom administration
tasks and scripts that you can perform or run. For example, the CLR isn't supported
with SQL Database, but is supported for an instance of SQL Managed Instance.
Also, no deployment options in PaaS support the use of trace flags.
Service-level agreement: Both IaaS and PaaS provide high, industry standard SLA.
PaaS option guarantees 99.99% SLA, while IaaS guarantees 99.95% SLA for
infrastructure, meaning that you need to implement additional mechanisms to
ensure availability of your databases. You can attain 99.99% SLA by creating an
additional SQL virtual machine, and implementing the SQL Server Always On
availability group high availability solution.
Time to move to Azure: SQL Server on Azure VM is the exact match of your
environment, so migration from on-premises to the Azure VM is no different than
moving the databases from one on-premises server to another. SQL Managed
Instance also enables easy migration; however, there might be some changes that
you need to apply before your migration.

Service comparison

As seen in the diagram, each service offering can be characterized by the level of
administration you have over the infrastructure, and by the degree of cost efficiency.

In Azure, you can have your SQL Server workloads running as a hosted service (PaaS ),
or a hosted infrastructure (IaaS ) supporting the software layer, such as Software-as-a-
Service (SaaS) or an application. Within PaaS, you have multiple product options, and
service tiers within each option. The key question that you need to ask when deciding
between PaaS or IaaS is - do you want to manage your database, apply patches, and
take backups - or do you want to delegate these operations to Azure?

Azure SQL Database


Azure SQL Database is a relational database-as-a-service (DBaaS) hosted in Azure that
falls into the industry category of Platform-as-a-Service (PaaS).

Best for modern cloud applications that want to use the latest stable SQL Server
features and have time constraints in development and marketing.
A fully managed SQL Server database engine, based on the latest stable Enterprise
Edition of SQL Server. SQL Database has two deployment options built on
standardized hardware and software that is owned, hosted, and maintained by
Microsoft.

With SQL Server, you can use built-in features and functionality that requires extensive
configuration (either on-premises or in an Azure virtual machine). When using SQL
Database, you pay-as-you-go with options to scale up or out for greater power with no
interruption. SQL Database has some additional features that aren't available in SQL
Server, such as built-in high availability, intelligence, and management.

Azure SQL Database offers the following deployment options:

As a single database with its own set of resources managed via a logical SQL server.
A single database is similar to a contained database in SQL Server. This option is
optimized for modern application development of new cloud-born applications.
Hyperscale and serverless options are available.
An elastic pool, which is a collection of databases with a shared set of resources
managed via a logical server. Single databases can be moved into and out of an
elastic pool. This option is optimized for modern application development of new
cloud-born applications using the multi-tenant SaaS application pattern. Elastic
pools provide a cost-effective solution for managing the performance of multiple
databases that have variable usage patterns.

7 Note

Elastic pools for Hyperscale are currently in preview.

Azure SQL Managed Instance


Azure SQL Managed Instance falls into the industry category of Platform-as-a-Service
(PaaS), and is best for most migrations to the cloud. SQL Managed Instance is a
collection of system and user databases with a shared set of resources that is lift-and-
shift ready.

Best for new applications or existing on-premises applications that want to use the
latest stable SQL Server features and that are migrated to the cloud with minimal
changes. An instance of SQL Managed Instance is similar to an instance of the
Microsoft SQL Server database engine offering shared resources for databases and
additional instance-scoped features.
SQL Managed Instance supports database migration from on-premises with
minimal to no database change. This option provides all of the PaaS benefits of
Azure SQL Database but adds capabilities that were previously only available in
SQL Server VMs. This includes a native virtual network and near 100% compatibility
with on-premises SQL Server. Instances of SQL Managed Instance provide full SQL
Server access and feature compatibility for migrating SQL Servers to Azure.

SQL Server on Azure VM


SQL Server on Azure VM falls into the industry category Infrastructure-as-a-Service (IaaS)
and allows you to run SQL Server inside a fully managed virtual machine (VM) in Azure.

SQL Server installed and hosted in the cloud runs on Windows Server or Linux
virtual machines running on Azure, also known as an infrastructure as a service
(IaaS). SQL virtual machines are a good option for migrating on-premises SQL
Server databases and applications without any database change. All recent
versions and editions of SQL Server are available for installation in an IaaS virtual
machine.
Best for migrations and applications requiring OS-level access. SQL virtual
machines in Azure are lift-and-shift ready for existing applications that require fast
migration to the cloud with minimal changes or no changes. SQL virtual machines
offer full administrative control over the SQL Server instance and underlying OS for
migration to Azure.
The most significant difference from SQL Database and SQL Managed Instance is
that SQL Server on Azure Virtual Machines allows full control over the database
engine. You can choose when to start maintenance activities including system
updates, change the recovery model to simple or bulk-logged, pause or start the
service when needed, and you can fully customize the SQL Server database engine.
With this additional control comes the added responsibility to manage the virtual
machine.
Rapid development and test scenarios when you don't want to buy on-premises
hardware for SQL Server. SQL virtual machines also run on standardized hardware
that is owned, hosted, and maintained by Microsoft. When using SQL virtual
machines, you can either pay-as-you-go for a SQL Server license already included
in a SQL Server image or easily use an existing license. You can also stop or resume
the VM as needed.
Optimized for migrating existing applications to Azure or extending existing on-
premises applications to the cloud in hybrid deployments. In addition, you can use
SQL Server in a virtual machine to develop and test traditional SQL Server
applications. With SQL virtual machines, you have the full administrative rights over
a dedicated SQL Server instance and a cloud-based VM. It is a perfect choice when
an organization already has IT resources available to maintain the virtual machines.
These capabilities allow you to build a highly customized system to address your
application's specific performance and availability requirements.

Comparison table
Additional differences are listed in the following table, but both SQL Database and SQL
Managed Instance are optimized to reduce overall management costs to a minimum for
provisioning and managing many databases. Ongoing administration costs are reduced
since you don't have to manage any virtual machines, operating system, or database
software. You don't have to manage upgrades, high availability, or backups.

In general, SQL Database and SQL Managed Instance can dramatically increase the
number of databases managed by a single IT or development resource. Elastic pools
also support SaaS multi-tenant application architectures with features including tenant
isolation and the ability to scale to reduce costs by sharing resources across databases.
SQL Managed Instance provides support for instance-scoped features enabling easy
migration of existing applications, as well as sharing resources among databases.
Whereas SQL Server on Azure VMs provide DBAs with an experience most similar to the
on-premises environment they're familiar with.

Azure SQL Azure SQL SQL Server on Azure VM


Database Managed Instance
Azure SQL Azure SQL SQL Server on Azure VM
Database Managed Instance

Supports most Supports almost all You have full control over the SQL Server engine.
on-premises on-premises Supports all on-premises capabilities.

database-level instance-level and Up to 99.99% availability.

capabilities. The database-level Full parity with the matching version of on-premises
most commonly capabilities. High SQL Server.

used SQL Server compatibility with Fixed, well-known Database Engine version.

features are SQL Server.


Easy migration from SQL Server.

available.
99.99% availability Private IP address within Azure Virtual Network.

99.995% guaranteed.
You have the ability to deploy application or services
availability Built-in backups, on the host where SQL Server is placed.
guaranteed.
patching, recovery.

Built-in backups, Latest stable


patching, Database Engine
recovery.
version.

Latest stable Easy migration from


Database Engine SQL Server.

version.
Private IP address
Ability to assign within Azure Virtual
necessary Network.

resources Built-in advanced


(CPU/storage) to intelligence and
individual security.

databases.
Online change of
Built-in resources
advanced (CPU/storage).
intelligence and
security.

Online change of
resources
(CPU/storage).
Azure SQL Azure SQL SQL Server on Azure VM
Database Managed Instance

Migration from There's still some You may use manual or automated backups.

SQL Server minimal number of You need to implement your own High-Availability
might be SQL Server features solution.

challenging.
that aren't available.
There's a downtime while changing the
Some SQL Server Configurable resources(CPU/storage)
features aren't maintenance
available.
windows.

Configurable Compatibility with


maintenance the SQL Server
windows.
version can be
Compatibility achieved only using
with the SQL database
Server version compatibility levels.
can be achieved
only using
database
compatibility
levels.

Private IP
address support
with Azure
Private Link.

Databases of up Up to 16 TB. SQL Server instances with up to 256 TB of storage. The


to 100 TB. instance can support as many databases as needed.

On-premises Native virtual With SQL virtual machines, you can have applications
application can network that run partly in the cloud and partly on-premises. For
access data in implementation and example, you can extend your on-premises network
Azure SQL connectivity to your and Active Directory Domain to the cloud via Azure
Database. on-premises Virtual Network. For more information on hybrid cloud
environment using solutions, see Extending on-premises data solutions to
Azure Express Route the cloud.
or VPN Gateway.

Cost
Whether you're a startup that is strapped for cash, or a team in an established company
that operates under tight budget constraints, limited funding is often the primary driver
when deciding how to host your databases. In this section, you learn about the billing
and licensing basics in Azure associated with the Azure SQL family of services. You also
learn about calculating the total application cost.
Billing and licensing basics
Currently, both SQL Database and SQL Managed Instance are sold as a service and are
available with several options and in several service tiers with different prices for
resources, all of which are billed hourly at a fixed rate based on the service tier and
compute size you choose. For the latest information on the current supported service
tiers, compute sizes, and storage amounts, see DTU-based purchasing model for SQL
Database and vCore-based purchasing model for both SQL Database and SQL Managed
Instance.

With SQL Database, you can choose a service tier that fits your needs from a wide
range of prices starting from 5$/month for basic tier and you can create elastic
pools to share resources among databases to reduce costs and accommodate
usage spikes.
With SQL Managed Instance, you can also bring your own license. For more
information on bring-your-own licensing, see License Mobility through Software
Assurance on Azure or use the Azure Hybrid Benefit calculator to see how to
save up to 40%.

In addition, you're billed for outgoing Internet traffic at regular data transfer rates . You
can dynamically adjust service tiers and compute sizes to match your application's
varied throughput needs.

With SQL Database and SQL Managed Instance, the database software is automatically
configured, patched, and upgraded by Azure, which reduces your administration costs.
In addition, its built-in backup capabilities help you achieve significant cost savings,
especially when you have a large number of databases.

With SQL on Azure VMs, you can use any of the platform-provided SQL Server images
(which includes a license) or bring your SQL Server license. All the supported SQL Server
versions (2008R2, 2012, 2014, 2016, 2017, 2019) and editions (Developer, Express, Web,
Standard, Enterprise) are available. In addition, Bring-Your-Own-License versions (BYOL)
of the images are available. When using the Azure provided images, the operational cost
depends on the VM size and the edition of SQL Server you choose. Regardless of VM
size or SQL Server edition, you pay per-minute licensing cost of SQL Server and the
Windows or Linux Server, along with the Azure Storage cost for the VM disks. The per-
minute billing option allows you to use SQL Server for as long as you need without
buying addition SQL Server licenses. If you bring your own SQL Server license to Azure,
you are charged for server and storage costs only. For more information on bring-your-
own licensing, see License Mobility through Software Assurance on Azure . In addition,
you are billed for outgoing Internet traffic at regular data transfer rates .
Calculating the total application cost
When you start using a cloud platform, the cost of running your application includes the
cost for new development and ongoing administration costs, plus the public cloud
platform service costs.

For more information on pricing, see the following resources:

SQL Database & SQL Managed Instance pricing


Virtual machine pricing for SQL and for Windows
Azure Pricing Calculator

Administration
For many businesses, the decision to transition to a cloud service is as much about
offloading complexity of administration as it's cost. With IaaS and PaaS, Azure
administers the underlying infrastructure and automatically replicates all data to provide
disaster recovery, configures and upgrades the database software, manages load
balancing, and does transparent failover if there's a server failure within a data center.

With SQL Database and SQL Managed Instance, you can continue to administer
your database, but you no longer need to manage the database engine, the
operating system, or the hardware. Examples of items you can continue to
administer include databases and logins, index and query tuning, and auditing and
security. Additionally, configuring high availability to another data center requires
minimal configuration and administration.
With SQL on Azure VM, you have full control over the operating system and SQL
Server instance configuration. With a VM, it's up to you to decide when to
update/upgrade the operating system and database software and when to install
any additional software such as anti-virus. Some automated features are provided
to dramatically simplify patching, backup, and high availability. In addition, you can
control the size of the VM, the number of disks, and their storage configurations.
Azure allows you to change the size of a VM as needed. For information, see
Virtual Machine and Cloud Service Sizes for Azure.

Service-level agreement (SLA)


For many IT departments, meeting up-time obligations of a service-level agreement
(SLA) is a top priority. In this section, we look at what SLA applies to each database
hosting option.
For both Azure SQL Database and Azure SQL Managed Instance, Microsoft provides an
availability SLA of 99.99%. For the latest information, see Service-level agreement .

For SQL on Azure VM, Microsoft provides an availability SLA of 99.95% for two virtual
machines in an availability set, or 99.99% for two virtual machines in different availability
zones. This means that at least one of the two virtual machines will be available for the
given SLA, but it does not cover the processes (such as SQL Server) running on the VM.
For the latest information, see the VM SLA . For database high availability (HA) within
VMs, you should configure one of the supported high availability options in SQL Server,
such as Always On availability groups. Using a supported high availability option doesn't
provide an additional SLA, but allows you to achieve >99.99% database availability.

Time to move to Azure


Azure SQL Database is the right solution for cloud-designed applications when
developer productivity and fast time-to-market for new solutions are critical. With
programmatic DBA-like functionality, it's perfect for cloud architects and developers as it
lowers the need for managing the underlying operating system and database.

Azure SQL Managed Instance greatly simplifies the migration of existing applications to
Azure, enabling you to bring migrated database applications to market in Azure quickly.

SQL on Azure VM is perfect if your existing or new applications require large databases
or access to all features in SQL Server or Windows/Linux, and you want to avoid the time
and expense of acquiring new on-premises hardware. It's also a good fit when you want
to migrate existing on-premises applications and databases to Azure as-is - in cases
where SQL Database or SQL Managed Instance isn't a good fit. Since you don't need to
change the presentation, application, and data layers, you save time and budget on
rearchitecting your existing solution. Instead, you can focus on migrating all your
solutions to Azure and in doing some performance optimizations that may be required
by the Azure platform. For more information, see Performance Best Practices for SQL
Server on Azure Virtual Machines.

Create and manage Azure SQL resources with


the Azure portal
The Azure portal provides a single page where you can manage all of your Azure SQL
resources including your SQL Server on Azure virtual machines (VMs).

To access the Azure SQL page, from the Azure portal menu, select Azure SQL or search
for and select Azure SQL in any page.
7 Note

Azure SQL provides a quick and easy way to access all of your SQL resources in the
Azure portal, including single and pooled databases in Azure SQL Database as well
as the logical server hosting them, Azure SQL Managed Instances, and SQL Server
on Azure VMs. Azure SQL is not a service or resource, but rather a family of SQL-
related services.

To manage existing resources, select the desired item in the list. To create new Azure
SQL resources, select + Create.

After selecting + Create, view additional information about the different options by
selecting Show details on any tile.

For details, see:


Create a single database
Create an elastic pool
Create a managed instance
Create a SQL virtual machine

Next steps
See Your first Azure SQL Database to get started with SQL Database.
See Your first Azure SQL Managed Instance to get started with SQL Managed
Instance.
See SQL Database pricing .
See Azure SQL Managed Instance pricing .
See Provision a SQL Server virtual machine in Azure to get started with SQL Server
on Azure VMs.
Identify the right SQL Database or SQL Managed Instance SKU for your on-
premises database.
Migrate to Azure SQL
Find documentation on how to migrate to the Azure SQL family of SQL Server database
engine products in the cloud: Azure SQL Database, Azure SQL Managed Instance, and
SQL Server on Azure VM.

Azure SQL Database

b GET STARTED

Overview

From SQL Server

From Access

From DB2

From Oracle

From MySQL

From SAP ASE

Azure SQL Managed Instance

b GET STARTED

Overview

From SQL Server

From DB2

From Oracle

SQL Server on Azure VM

b GET STARTED

Overview

From SQL Server

From DB2
From Oracle

Migration tools

` DEPLOY

Azure Migrate

Azure Database Migration Service (DMS)

Data Migration Assistant (DMA)

Transactional replication

Import & export service / BACPAC

Bulk copy

Azure Data Factory

SQL Data Sync


Migrate SQL Server workloads (FAQ)
Article • 03/24/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

SQL Server on Azure VM

Migrating on-premises SQL Server workloads and associated applications to the cloud
usually brings a wide range of questions which go beyond mere product feature
information.

This article provides a holistic view and helps understand how to fully unlock the value
when migrating to Azure SQL. The Modernize applications and SQL section covers
questions about Azure SQL in general as well as common application and SQL
modernization scenarios. The Business and technical evaluation section covers cost
saving, licensing, minimizing migration risk, business continuity, security, workloads and
architecture, performance and similar business and technical evaluation questions. The
last section covers the actual Migration and modernization process, including guidance
on migration tools.

Modernize applications and SQL

Azure SQL

What are the benefits of moving applications and SQL Server


workloads to Azure?

A migration to Azure brings optimized costs, flexibility and scalability, enhanced


security, compliance, improved business continuity, and simplified management and
monitoring.

What is Azure SQL?

Azure SQL is a family of services that use the SQL Server database engine in the Azure
Cloud. The following services belong to Azure SQL: Azure SQL Database (SQL Database),
Azure SQL Managed Instance (SQL Managed Instance) and SQL Server on Azure VMs.

What is the difference between migration and modernization to


Azure SQL?
Migration to Azure SQL involves moving applications, infrastructure, and data from one
location (for example, a company's on-premises datacenter) to Azure infrastructure. For
SQL Server customers, this means migrating your workloads while minimizing impact to
operations. You can reduce IT costs, enhance security and resilience, and achieve on-
demand scale.

Modernization to Azure SQL involves updating existing applications for newer


computing approaches and application frameworks and use of cloud-native
technologies. This can be achieved by using PaaS services such as Azure SQL Database
and Azure SQL Managed Instance, which provides extra benefits of app innovation,
agility, developer velocity, and cost optimization.

What does IaaS and PaaS mean?


Infrastructure as a service (IaaS) is a type of cloud computing service that offers
essential compute, storage, and networking resources on demand.

Platform as a service (PaaS) is a complete development and deployment environment


in the cloud, with resources that enable you to deliver everything from simple cloud-
based apps to sophisticated, cloud-enabled enterprise applications.

PaaS provides additional advantages over IaaS, such as shorter development cycles,
extra development capabilities without adding staff, affordable access to sophisticated
tools, to mention a few. Azure SQL provides both PaaS (SQL Managed Instance, SQL
Database) and IaaS (SQL VM) services.

How do I decide if I should move my SQL Server to a Virtual


Machine, SQL Managed Instance or SQL Database?

SQL Managed Instance is the right PaaS target to modernize your existing SQL
Server applications at scale providing almost all SQL Server features (including
instance-level features) while reducing the costs of server and database
management.

SQL Database is the most appropriate choice when building native cloud
applications, as it offers high elasticity and flexibility of choosing between
architectural and compute tiers, such as Serverless tier for increased elasticity and
Hyperscale tier for a highly scalable storage and compute resources.

If you need full control and customization, including OS access, you can opt for
SQL Server on Azure VM. The service comparison provides more details. A range
of migration tools helps making the optimal choice by providing an assessment of
target service compatibility and costs.

How can I reduce costs by moving to Azure SQL?

Moving to Azure brings savings in resource, maintenance, and real estate costs, in
addition to the ability to optimize workloads so that they cost less to run. Azure SQL
Managed Instance and SQL Database bring all the advantages of PaaS services,
providing automated performance tuning, backups, software patching and high-
availability, all of which entails enormous effort and cost when performing manually.

For example, SQL Managed Instance and SQL Database (single database and elastic
pool) come with built-in HA. Also, Business Critical (SQL Managed Instance) and
Premium (SQL Database) tiers provide read-only replicas at no additional cost, while
SQL Database Hyperscale tier allows HA and named secondary replicas for read scale-
out at no license cost. Additionally, Software Assurance customers can use their on-
premises SQL Server license on Azure by applying Azure Hybrid Benefit (AHB).
Software Assurance also lets you implement free passive HA and DR secondaries using
SQL VM.

In addition, every Azure SQL service provides you the option to reserve instances in
advance (1-3 years) and obtain significant additional savings. Dev/Test pricing plans
provide a way to further reduce development costs. Finally, check the following article
on how you can Optimize your Azure SQL Managed Instance cost with Microsoft Azure
Well-Architected Framework .

What is the best licensing path to save costs when moving existing
SQL Server workloads to Azure?
Unique to Azure, Azure Hybrid Benefit (AHB) is a licensing benefit that allows you
bringing your existing Windows Server and SQL Server licenses with Software Assurance
(SA) to Azure. Combined with reservations savings and extended security updates, AHB
can bring you up to 85% savings compared to pay-as-you-go pricing in Azure SQL. In
addition, make sure to check different Dev/Test pricing plans .

Applications and SQL modernization scenarios

Scenario 1: Data center move to the cloud: what is the


modernization path for applications and SQL Server databases?
Updating an organization's existing apps to a cloud first model can be achieved by
using fully managed application and data services including Azure App Service , Azure
Spring Apps , Azure SQL Database, Azure SQL Managed Instance and other PaaS
services. Azure Kubernetes Services (AKS) provides a managed container-based
approach within Azure. Application and Data Modernization in Azure is achieved
through several stages , with the most common scenario examples described within
the Cloud Adoption Framework.

Scenario 2: Reducing SQL Server costs: How can I reduce the cost
for my existing SQL Server fleet?
Moving to Azure SQL VMs, SQL Managed Instance or SQL Database brings savings in
resource, maintenance, and real estate costs. Using your SQL Server on-premises
licenses in Azure via Azure Hybrid Benefit , using Azure Reservations for SQL VM, SQL
Managed Instance and SQL Database vCores, and using constrained vCPU capable
Virtual Machines will give you a wide variety of options to build a cost-effective solution.

For implementing BCDR solutions in Azure SQL, you benefit from built-in HA replicas of
SQL Managed Instance and SQL Database or free passive HA and DR secondaries using
SQL VM. Also, Business Critical (SQL Managed Instance) and Premium (SQL Database)
tiers provide read-only replicas at no additional cost, while SQL Database Hyperscale tier
allows HA and named secondary replicas for read scale-out at no license cost. In
addition, make sure to check different Dev/Test pricing plans .

If you're interested to understand how you can save up to 64% by moving to Azure SQL
please check ESG report on The Economic Value of Migrating On-Premises SQL Server
Instances to Microsoft Azure SQL Solutions . Finally, check the following article on how
you can Optimize your Azure SQL Managed Instance cost with Microsoft Azure Well-
Architected Framework .

Scenario 3: Optimize application portfolio: How can I at the same


time modernize both my application portfolio and SQL Server
instances?

Application and Data Modernization in Azure is achieved through several stages, with
the most common scenario examples described within the Cloud Adoption Framework.

Scenario 4: SQL Server end of support: Which options do I have to


move to Azure SQL?
Once your SQL Server has reached the end of support stage, you have several
modernization options towards Azure SQL. One of the options is to migrate your
workload to an Azure SQL Managed Instance, which provides high feature parity with
the on-premises SQL Server product. Alternatively, with some additional effort, you can
move the workload to Azure SQL Database. These services run on SQL Server evergreen
features, effectively granting you "the end of End of Support".

Backward compatibility is provided via compatibility levels and database compatibility


level settings. Tools like Azure SQL Migration extension in Azure Data Studio or Data
Migration Assistant will help you identify possible incompatibilities.

Whenever a Platform-as-a-Service (PaaS) solution doesn't fit your workload, Azure SQL
Virtual Machines provide the possibility to do an as-is migration. By moving to Azure
SQL VM, you'll also receive free extended security patches which can provide significant
savings (for example, up to 69% for SQL Server 2012).

Scenario 5: Meeting regulatory compliance: How does Azure SQL


help meet regulatory compliance requirements?

Azure Policy has built-in policies that help organizations meet regulatory compliance. Ad
hoc and customized policies can also be created. For more information, see Azure Policy
Regulatory Compliance controls for Azure SQL Database and SQL Managed Instance.
For an overview of compliance offerings, you can consult Azure compliance
documentation.

Getting started, the holistic approach

How to prepare a migration business case?

The Microsoft Cloud Adoption Framework for Azure is a great starting point to help
you create and implement the business and technology strategy necessary for your
move to Azure.

Where can I find migration guides for Azure SQL?


The following guides help you discover, assess, and migrate from SQL Server to SQL VM,
SQL Managed Instance and SQL Database.

Do I have to modernize applications and SQL at the same time?


What are my options?
No. Feel free to take an iterative approach to modernizing each workload and
component.

Can I modernize SQL Server to SQL Managed Instance and just lift
and shift my application to a VM?
Yes. You can Connect your application to Azure SQL Managed Instance through different
scenarios, including when hosting it on a VM.

Business and technical evaluation

Total cost of ownership, licensing and benefits

How can I estimate Total Cost of Ownership (TCO) savings when


moving to Azure SQL?

Moving to Azure SQL brings significant TCO savings by improving operational efficiency
and business agility, as well as eliminating the need for on-premises hardware and
software. According to ESG report on The Economic Value of Migrating On-Premises
SQL Server Instances to Microsoft Azure SQL Solutions , you can save up to 47% when
migrating from on-premises to Azure SQL Virtual Machines (IaaS), and up to 64% when
migrating to Azure SQL Managed Instance or Azure SQL Database (PaaS).

What is the licensing model for SQL Managed Instance?

SQL Managed Instance licensing follows vCore-based licensing model, where you pay
for compute, storage, and backup storage resources. You can choose between several
service tiers (General Purpose, Business Critical) and hardware generations. The SQL
Managed Instance pricing page provides a full overview of possible SKUs and prices.

What is the licensing model for SQL Database?


SQL Database provides a choice between the vCore-based purchasing model and
Database transaction unit purchasing model. You can explore Pricing - Azure SQL
Database Single Database and learn about pricing options.

What subscription types are supported in SQL Managed Instance?

Check Supported subscription types for SQL Managed Instance.


Can I use my on-premises SQL Server license when moving to
Azure SQL?

If you own Software Assurance for core-based or qualifying subscription licenses for SQL
Server Standard Edition or SQL Server Enterprise Edition, you can use your existing SQL
Server license when moving to SQL Managed Instance, SQL Database or Azure VM by
applying Azure Hybrid Benefit (AHB). You can also simultaneously use these licenses
both in on-premises and Azure environments (dual use rights) for up to 180 days.

How do I move from SQL VM to SQL Managed Instance?


You can follow the same migration guide as for the on-premises SQL Server.

I'm using SQL Server subscription license. Can I use it to move to


Azure SQL?

Yes, qualifying subscription licenses can be used to pay Azure SQL services at a reduced
(base) rate by applying Azure Hybrid Benefit (AHB).

I'm using SQL Server CAL licenses. How can I move to Azure SQL?
SQL Server CAL licenses with appropriate license mobility rights can be used on Azure
SQL VMs, and on Azure SQL Dedicated Host.

What is Azure Hybrid Benefit (AHB)?


Unique to Azure, Azure Hybrid Benefit (AHB) is a licensing benefit that allows you
bringing your existing Windows Server and SQL Server licenses with Software Assurance
(SA) to Azure. AHB can bring you up to 85% savings compared to pay-as-you-go pricing
in Azure SQL, when combined with reservations savings and extended security updates.

How do I translate my SQL Server on-premises license to vCore


license in SQL Managed Instance, SQL Database, and SQL VM?
For every one (1) core of SQL Server Enterprise Edition, you get four (4) vCores of SQL
Managed Instance General Purpose tier or one (1) vCore of SQL Managed Instance
Business Critical tier. Similarly, one (1) core of SQL Server Standard Edition translates to
one (1) vCore of SQL Managed Instance General Purpose tier, while four (4) vCores of
SQL Server Standard Edition translate to one (1) vCore of SQL Managed Instance
Business Critical.
The Azure Hybrid Benefit August 2020 update provides an overview of possible core-
to-vCore conversions for SQL Managed Instance, SQL Database and SQL VM. Applicable
AHB rights are also described in the Product Terms . You can also use the Azure Hybrid
Benefit Savings Calculator to calculate the exact savings for your SQL Server estate.

Is Software Assurance (SA) required for using SQL Server license on


Azure SQL?
Software Assurance is a licensing program that can be applied to on-premises SQL
Server licenses, allowing license mobility, AHB, and other benefits. SA is required if AHB
is to be invoked for using existing SQL Server licenses (with SA) when moving to Azure
SQL. Without SA + AHB, customers are charged with PAYG pricing.

Alternatively, the outsourcing software management terms applicable to SQL server


licenses acquired prior to October 1, 2019 permit you to allocate your existing licenses
to Azure Dedicated Host just as you would license a server in your own data center: see
Pricing - Dedicated Host Virtual Machines .

Do I have to pay for high availability (HA) in SQL Managed Instance


and SQL Database?

Both General Purpose and Business Critical tiers of SQL Managed Instance and SQL
Database are built on top of inherent high availability architecture. This way, there's no
extra charge for HA. For SQL Database Hyperscale tier HA replica is charged.

Do I have to pay for HA and DR replicas for Azure SQL?


If you have Software Assurance, on Azure SQL VM you can implement high availability
(HA) and disaster recovery (DR) plans with SQL Server without incurring additional
licensing costs for the passive disaster recovery instance. See the SQL VM
documentation for more details.

Do I have to pay for disaster recovery (DR) in SQL Managed


Instance and SQL Database?
Yes. These are separate costs.

Can I centrally manage Azure Hybrid Benefit for SQL Server across
the entire Azure subscription?
Yes. You can centrally manage your Azure Hybrid Benefit for SQL Server across the scope
of an entire Azure subscription or overall billing account. This feature is currently in
preview.

If I move some of SQL Servers, my workloads to SQL Managed


Instance and leave some workloads on-premises, can I manage all
my SQL licenses in one place?
You can centrally manage your licenses that are covered by Azure Hybrid Benefit for SQL
Server across the scope of an entire Azure subscription or overall billing account. This
data can be combined with an overview maintained by your licensing
partner/procurement department or obtaining licensing information by creating your
own custom licensing overview . Your licenses must be used either on-premises or in
the cloud, but you'll have 180 days of concurrent use rights while migrating servers.

How can I minimize downtime during the online migration?


The Link feature for Azure SQL Managed Instance offers the best possible minimum
downtime online migrations solution, meeting the needs of the most critical tier-1
applications. You can consult a full range of migration tools and technologies choose
the optimal for your use scenario.

Risk free migration with a hybrid strategy

Can I keep running on-premises, while modernizing my


applications in Azure?
With SQL Server 2016, 2017, 2019, and 2022, you can use the Link feature for Azure SQL
Managed Instance to create a hybrid connection between SQL Server and Azure SQL
Managed Instance. Data is replicated near real-time from SQL Server to Azure, and can
be used to modernize your workloads in Azure. You can use the replicated data in Azure
for read scale-out and for offloading analytics.

For how long can I keep the hybrid solution using Link feature for
Azure SQL Managed Instance running?
You can keep running the hybrid link for as long as needed: weeks, months, years at a
time, there are no restrictions on this.
Can I apply a hybrid approach and use Link feature for Azure SQL
Managed Instance in order to validate my migration strategy,
before migrating to Azure?
Yes, you can use your replicated data in Azure to test and validate your migration
strategy (performance, workloads and applications) prior to migrating to Azure.

Can I reverse migrate out of Azure SQL and go back to SQL Server
if necessary?
With SQL Server 2022, we offer the best possible solution to seamlessly move data back
with native backup and restore from SQL Managed Instance to SQL Server, completely
de-risking the migrations strategy to Azure.

Workloads and architecture

How do I determine which SQL Server workloads should be moved


to SQL Managed Instance?
When migrating SQL Server workloads to Azure SQL Managed Instance is normally the
first option, as most databases are "as-is" ready to migrate to SQL Managed Instance.
There are several tools available to help you assess your workload for compatibility with
Azure SQL Managed Instance.

You can use the Azure SQL Migration extension in Azure Data Studio or Data Migration
Assistant. Both tools provide help to detect issues that can affect the Azure SQL
Managed Instance migration and provide guidance on how to resolve them. After
verifying compatibility, you can run the SKU recommendation tool to analyze
performance data and recommend a minimal Azure SQL Managed Instance SKU. Make
sure to visit Azure Migrate which is a centralized hub to assess and migrate on-premises
servers, infrastructure, applications, and data to Azure.

How do I determine the appropriate SQL Managed Instance target


for a particular SQL Server on-premises workload: SQL Managed
Instance General Purpose or Business Critical tier?

SQL Managed Instance tier choice is guided by availability, performance (for example,
throughput, OIPS, latency) and feature (for example, in-memory OLTP) requirements.
The General Purpose tier is suitable for most generic workloads, as it already provides
HA architecture and a fully managed database engine with a storage latency between 5
ms and 10 ms. The Business Critical tier is designed for applications that require low-
latency (1-2 ms) responses from the storage layer, fast recovery, strict availability
requirements, and the ability to off-load analytics workloads.

How can I move a highly automated SQL Server to SQL Managed


Instance?

Infrastructure deployment automation of Azure SQL can be done with PowerShell and
CLI. Useful examples can be found in the Azure PowerShell samples for Azure SQL
Database and Azure SQL Managed Instance article. You can use Azure DevOps
Continuous Integration (CI) and Deployment (CD) Pipelines to fully embed automation
within your Infrastructure-as-Code practices.

Building your database models and scripts can also be integrated through Database
Projects with Visual Studio Code or Visual Studio. The use of Azure DevOps CI/CD
pipelines will enable deployment of your Database Projects to an Azure SQL
destination of your choice. Finally, service automation via third party tools is also
possible. For more information, see Azure SQL Managed Instance – Terraform
command .

Can I move only a specific workload out of an on-premises cluster


and what will be the impact to licensing and cost?
It's possible to only migrate the databases related to one workload towards an Azure
SQL Managed Instance. Creating and operating an Azure SQL Managed Instance will
require SQL Server licenses. Azure Hybrid Benefit will provide you with the ability to
reuse your licenses. Reach out to your licensing partner to review what possibilities can
be used with license mobility and restructuring your current licenses.

I maintain a highly consolidated SQL Server with multiple


applications running against it. Can I move it to SQL Managed
Instance?
Similarly as with on-premises SQL Server, you can consolidate and run multiple
databases on a single SQL Managed Instance instance, at the same time benefiting from
inherent high-availability architecture as well as shared security and management. SQL
Managed Instance also supports cross database queries.

How do I migrate SQL Server Business Intelligence workloads


(including Reporting Services and Analysis Services) that aren't
compatible with SQL Managed Instance?
Least effort migration path will be to move as-is and host the Business Intelligence
components on an Azure VM. The Reporting Services database can be hosted on Azure
SQL Managed Instance and Azure Data Factory provides the capability to lift and shift
SSIS solutions to the cloud. When building a modern solution is part of the migration
effort, Azure is providing a wide variety of services to build an Enterprise data
warehouse solution.

I'm using an application from an ISV that doesn't support SQL


Managed Instance / Azure. What are my options to move my
application to Azure and SQL Server to Azure SQL?
Migrating your environment as-is to an Azure Virtual Machine will be the safest option
when the ISV or vendor isn't providing any options. However, we encourage ISVs and
vendors that are working closely with Microsoft to review the options with Azure SQL
Managed Instance. Azure SQL Managed Instance provides backward compatibility
options through database compatibility level, guidance for Transact-SQL differences and
has implemented major features to Azure SQL Managed instance.

How do I keep the compatibility of my current SQL Server database


version in SQL Managed Instance?

Database compatibility level can be set in Managed Instance, as described on the Azure
SQL Blog .

Security

How does Azure SQL help in enhancing the database security


posture?

The security strategy follows the layered defense-in-depth approach: Network security +
Access management + Threat protection + Information Protection. You can read more
about SQL Database and SQL Managed Instance security capabilities. Azure-wide,
Microsoft Defender for Cloud provides a solution for Cloud Security Posture
Management (SCPM) and Cloud Workload Protection (CWP).

Business continuity
How can I adapt on-premises business continuity and disaster
recovery (BCDR) concepts into Azure SQL Managed Instance
concepts?
Most Azure SQL BCDR concepts have an equivalent in on-premises SQL Server
implementations. For example, the inherent high availability of SQL Managed Instance
General Purpose tier can be seen as a cloud equivalent for SQL Server FCI. Similarly,
SQL Managed Instance Business Critical tier can be seen as a cloud equivalent for an
Always On Availability Group with synchronous commit to a minimum number of
replicas. As a Disaster Recovery concept, an Auto-failover Group on SQL Managed
Instance is comparable to an Asynchronous Always On Availability Group with
asynchronous commit. SQL Database and SQL Managed Instance HA are backed by
Service-Level Agreements . You can find more on SQL Database and SQL Managed
Instance Business Continuity in the official documentation.

How are backups handled in Azure SQL PaaS services?

You can check documentation for automated backups in SQL Managed Instance and
SQL Database to learn about RPO, RTO, retention, scheduling and other backup
capabilities and features.

How is high availability (HA) achieved in SQL Managed Instance


and SQL Database?
SQL Managed Instance and Database are built on top of inherent high availability (HA)
architecture. This includes support for auto-failover groups and various other features.
You can choose between two HA architecture models: Standard availability model in
General Purpose service tier , or Premium availability model in Business Critical service
tier .

How does disaster recovery work in SQL Managed Instance and


SQL Database?
See the SQL Database and SQL Managed Instance documentation. SQL Managed
Instance Frequently Asked Questions provide information on DR options.

Performance and scale

How do I obtain better performance by moving on-premises SQL


Server to SQL Managed Instance, SQL Database or SQL VM?
Moving from on-premises will provide you with performance benefits due to the latest
SQL Server database engine features, cloud scaling flexibility and the newest generation
of underlying hardware. Find out why your SQL Server data belongs on Azure . You can
also read a recently published study by Principled Technologies benchmarking SQL
Managed Instance and SQL Server on Amazon Web Services (AWS) RDS. It's important
to ensure a proper sizing based on your requirements for CPU, memory and storage
(IOPS, latency, transaction log throughput and size). SQL Managed Instance and SQL
Database also provide a choice between different hardware configurations and service
tiers that provide additional means to reach target performance. Applications can also
take advantage of read scale-out abilities including with named replicas and geo-
replicas, and techniques such as database sharding.

How can I compare SQL Managed Instance performance to SQL


Server performance?
See the Performance section of the SQL Managed Instance FAQ for guidance on
performance comparison and tuning.

Migration and modernization process

I want to modernize SQL Server workloads to Azure SQL. What is


the next step?
A great place to start is joining the Azure Migration and Modernization Program .
When you start a migration project, a good practice is to form a dedicated Migration
team to formulate and execute the migration plan. If your company has an assigned
Microsoft or Microsoft Partner account team, they can provide guidance regarding
Migration team required skill set and overall process.

Where can I find migration guides to Azure SQL?


The following guides help you discover, assess, and migrate from SQL Server to SQL VM,
SQL Managed Instance and SQL Database. You can consult Azure Database Migration
Guides that also contains guides for migrating to another database targets.

Which migration tools can I use?

You can use the Azure SQL migration extension for Azure Data Studio for SQL Server
assessment and migration, or choose among other migration tools.
How do I minimize downtime during the online migration?
The Link feature for Azure SQL Managed Instance offers the best possible minimum
downtime online migrations solution, meeting the needs of the most critical tier-1
applications.

How can I optimize the costs once I migrate to Azure SQL?

Cost Optimization guidelines of Microsoft Azure Well-Architected Framework (WAF)


provide methodology to optimize costs for every Azure SQL service. You can also find
out more about WAF cost optimization highlights for SQL Managed Instance.

See also
Frequently asked questions for SQL Server on Azure VMs
Azure SQL Managed Instance frequently asked questions (FAQ)
Azure SQL Database Hyperscale FAQ
Azure Hybrid Benefit FAQ
Azure security baseline for Azure SQL
Article • 05/31/2023

This security baseline applies guidance from the Microsoft cloud security benchmark
version 1.0 to Azure SQL. The Microsoft cloud security benchmark provides
recommendations on how you can secure your cloud solutions on Azure. The content is
grouped by the security controls defined by the Microsoft cloud security benchmark and
the related guidance applicable to Azure SQL.

You can monitor this security baseline and its recommendations using Microsoft
Defender for Cloud. Azure Policy definitions will be listed in the Regulatory Compliance
section of the Microsoft Defender for Cloud dashboard.

When a feature has relevant Azure Policy Definitions, they are listed in this baseline to
help you measure compliance to the Microsoft cloud security benchmark controls and
recommendations. Some recommendations may require a paid Microsoft Defender plan
to enable certain security scenarios.

7 Note

Features not applicable to Azure SQL have been excluded. To see how Azure SQL
completely maps to the Microsoft cloud security benchmark, see the full Azure SQL
security baseline mapping file .

Security profile
The security profile summarizes high-impact behaviors of Azure SQL, which may result
in increased security considerations.

Service Behavior Attribute Value

Product Category Databases

Customer can access HOST / OS No Access

Service can be deployed into customer's virtual network True

Stores customer content at rest True

Network security
For more information, see the Microsoft cloud security benchmark: Network security.

NS-1: Establish network segmentation boundaries

Features

Virtual Network Integration

Description: Service supports deployment into customer's private Virtual Network


(VNet). Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: Deploy the service into a virtual network. Assign private IPs to
the resource (where applicable) unless there is a strong reason to assign public IPs
directly to the resource.

Reference: Use virtual network service endpoints and rules for servers in Azure SQL
Database

Network Security Group Support

Description: Service network traffic respects Network Security Groups rule assignment
on its subnets. Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: Use Azure Virtual Network Service Tags to define network
access controls on network security groups or Azure Firewall configured for your Azure
SQL resources. You can use service tags in place of specific IP addresses when creating
security rules. By specifying the service tag name in the appropriate source or
destination field of a rule, you can allow or deny the traffic for the corresponding
service. Microsoft manages the address prefixes encompassed by the service tag and
automatically updates the service tag as addresses change. When using service
endpoints for Azure SQL Database, outbound to Azure SQL Database Public IP
addresses is required: Network Security Groups (NSGs) must be opened to Azure SQL
Database IPs to allow connectivity. You can do this by using NSG service tags for Azure
SQL Database.
Reference: Use virtual network service endpoints and rules for servers in Azure SQL
Database

NS-2: Secure cloud services with network controls

Features

Azure Private Link

Description: Service native IP filtering capability for filtering network traffic (not to be
confused with NSG or Azure Firewall). Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: Deploy private endpoints for all Azure resources that support
the Private Link feature, to establish a private access point for the resources.

Reference: Azure Private Link for Azure SQL Database and Azure Synapse Analytics

Disable Public Network Access

Description: Service supports disabling public network access either through using
service-level IP ACL filtering rule (not NSG or Azure Firewall) or using a 'Disable Public
Network Access' toggle switch. Learn more.

Supported Enabled By Default Configuration Responsibility

True True Microsoft

Configuration Guidance: No additional configurations are required as this is enabled on


a default deployment.

Reference: Azure SQL connectivity settings

Microsoft Defender for Cloud monitoring

Azure Policy built-in definitions - Microsoft.Sql:

Name
Description Effect(s) Version

(Azure portal) (GitHub)


Name
Description Effect(s) Version

(Azure portal) (GitHub)

Private endpoint Private endpoint connections enforce secure Audit, 1.1.0


connections on communication by enabling private connectivity to Disabled
Azure SQL Azure SQL Database.
Database should
be enabled

Public network Disabling the public network access property improves Audit, 1.1.0
access on Azure security by ensuring your Azure SQL Database can only Deny,
SQL Database be accessed from a private endpoint. This configuration Disabled
should be denies all logins that match IP or virtual network based
disabled firewall rules.

Identity management
For more information, see the Microsoft cloud security benchmark: Identity management.

IM-1: Use centralized identity and authentication system

Features

Azure AD Authentication Required for Data Plane Access

Description: Service supports using Azure AD authentication for data plane access.
Learn more.

Supported Enabled By Default Configuration Responsibility

True False Shared

Feature notes: Azure SQL Database supports multiple data-plane authentication


mechanisms, one of which is AAD.

Configuration Guidance: Use Azure Active Directory (Azure AD) as the default
authentication method to control your data plane access.

Reference: Use Azure Active Directory authentication

Local Authentication Methods for Data Plane Access


Description: Local authentications methods supported for data plane access, such as a
local username and password. Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Feature notes: Avoid the usage of local authentication methods or accounts, these
should be disabled wherever possible. Instead use Azure AD to authenticate where
possible.

Configuration Guidance: Restrict the use of local authentication methods for data plane
access. Instead, use Azure Active Directory (Azure AD) as the default authentication
method to control your data plane access.

Reference: Azure SQL Database Access

Microsoft Defender for Cloud monitoring

Azure Policy built-in definitions - Microsoft.Sql:

Name
Description Effect(s) Version

(Azure portal) (GitHub)

An Azure Active Audit provisioning of an Azure Active Directory AuditIfNotExists, 1.0.0


Directory administrator for your SQL server to enable Disabled
administrator Azure AD authentication. Azure AD
should be authentication enables simplified permission
provisioned for management and centralized identity
SQL servers management of database users and other
Microsoft services

IM-3: Manage application identities securely and


automatically

Features

Managed Identities

Description: Data plane actions support authentication using managed identities. Learn
more.
Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: Use Azure managed identities instead of service principals


when possible, which can authenticate to Azure services and resources that support
Azure Active Directory (Azure AD) authentication. Managed identity credentials are fully
managed, rotated, and protected by the platform, avoiding hard-coded credentials in
source code or configuration files.

Reference: Managed identities for transparent data encryption with BYOK

Service Principals

Description: Data plane supports authentication using service principals. Learn more.

Supported Enabled By Default Configuration Responsibility

True True Microsoft

Feature notes: Azure SQL DB provides multiple ways to authenticate at the data plane,
one of which is Azure AD and includes managed identities and service principals.

Configuration Guidance: No additional configurations are required as this is enabled on


a default deployment.

Reference: Azure Active Directory service principal with Azure SQL

IM-7: Restrict resource access based on conditions

Features

Conditional Access for Data Plane

Description: Data plane access can be controlled using Azure AD Conditional Access
Policies. Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: Define the applicable conditions and criteria for Azure Active
Directory (Azure AD) conditional access in the workload. Consider common use cases
such as blocking or granting access from specific locations, blocking risky sign-in
behavior, or requiring organization-managed devices for specific applications.

Reference: Conditional Access with Azure SQL Database

IM-8: Restrict the exposure of credential and secrets

Features

Service Credential and Secrets Support Integration and Storage in


Azure Key Vault

Description: Data plane supports native use of Azure Key Vault for credential and secrets
store. Learn more.

Supported Enabled By Default Configuration Responsibility

False Not Applicable Not Applicable

Feature notes: Cryptographic keys ONLY can be stored in AKV, not secrets nor user
credentials. For example, Transparent Data Encryption protector keys.

Configuration Guidance: This feature is not supported to secure this service.

Privileged access
For more information, see the Microsoft cloud security benchmark: Privileged access.

PA-1: Separate and limit highly privileged/administrative


users

Features

Local Admin Accounts

Description: Service has the concept of a local administrative account. Learn more.

Supported Enabled By Default Configuration Responsibility

False Not Applicable Not Applicable


Feature notes: There is no 'local admin' for Azure SQL DB, there is no sa account either.
The account that sets up the instance is an admin, however.

Configuration Guidance: This feature is not supported to secure this service.

PA-7: Follow just enough administration (least privilege)


principle

Features

Azure RBAC for Data Plane

Description: Azure Role-Based Access Control (Azure RBAC) can be used to managed
access to service's data plane actions. Learn more.

Supported Enabled By Default Configuration Responsibility

False Not Applicable Not Applicable

Feature notes: Azure SQL Database provides a rich, database-specific data-plane


authorization model.

Configuration Guidance: This feature is not supported to secure this service.

PA-8: Determine access process for cloud provider


support

Features

Customer Lockbox

Description: Customer Lockbox can be used for Microsoft support access. Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: In support scenarios where Microsoft needs to access your


data, use Customer Lockbox to review, then approve or reject each of Microsoft's data
access requests.
Data protection
For more information, see the Microsoft cloud security benchmark: Data protection.

DP-1: Discover, classify, and label sensitive data

Features

Sensitive Data Discovery and Classification

Description: Tools (such as Azure Purview or Azure Information Protection) can be used
for data discovery and classification in the service. Learn more.

Supported Enabled By Default Configuration Responsibility

True True Microsoft

Configuration Guidance: No additional configurations are required as this is enabled on


a default deployment.

Reference: Data Discovery & Classification

DP-2: Monitor anomalies and threats targeting sensitive


data

Features

Data Leakage/Loss Prevention

Description: Service supports DLP solution to monitor sensitive data movement (in
customer's content). Learn more.

Supported Enabled By Default Configuration Responsibility

False Not Applicable Not Applicable

Feature notes: There are tools that can be used with SQL Server for DLP, but there is no
built-in support.

Configuration Guidance: This feature is not supported to secure this service.


Microsoft Defender for Cloud monitoring
Azure Policy built-in definitions - Microsoft.Sql:

Name
Description Effect(s) Version

(Azure portal) (GitHub)

Azure Defender for SQL should be Audit each SQL Managed AuditIfNotExists, 1.0.2
enabled for unprotected SQL Instance without advanced Disabled
Managed Instances data security.

DP-3: Encrypt sensitive data in transit

Features

Data in Transit Encryption

Description: Service supports data in-transit encryption for data plane. Learn more.

Supported Enabled By Default Configuration Responsibility

True True Microsoft

Configuration Guidance: No additional configurations are required as this is enabled on


a default deployment.

Reference: Minimal TLS version

DP-4: Enable data at rest encryption by default

Features

Data at Rest Encryption Using Platform Keys

Description: Data at-rest encryption using platform keys is supported, any customer
content at rest is encrypted with these Microsoft managed keys. Learn more.

Supported Enabled By Default Configuration Responsibility

True True Microsoft


Configuration Guidance: No additional configurations are required as this is enabled on
a default deployment.

Reference: Transparent data encryption for SQL Database, SQL Managed Instance, and
Azure Synapse Analytics

Microsoft Defender for Cloud monitoring

Azure Policy built-in definitions - Microsoft.Sql:

Name
Description Effect(s) Version

(Azure portal) (GitHub)

Transparent Data Transparent data encryption should be AuditIfNotExists, 2.0.0


Encryption on SQL enabled to protect data-at-rest and Disabled
databases should be meet compliance requirements
enabled

DP-5: Use customer-managed key option in data at rest


encryption when required

Features

Data at Rest Encryption Using CMK

Description: Data at-rest encryption using customer-managed keys is supported for


customer content stored by the service. Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: If required for regulatory compliance, define the use case and
service scope where encryption using customer-managed keys are needed. Enable and
implement data at rest encryption using customer-managed key for those services.

Reference: Transparent data encryption for SQL Database, SQL Managed Instance, and
Azure Synapse Analytics

Microsoft Defender for Cloud monitoring


Azure Policy built-in definitions - Microsoft.Sql:
Name
Description Effect(s) Version

(Azure portal) (GitHub)

SQL managed Implementing Transparent Data Encryption (TDE) with Audit, 2.0.0
instances your own key provides you with increased transparency Deny,
should use and control over the TDE Protector, increased security Disabled
customer- with an HSM-backed external service, and promotion of
managed keys separation of duties. This recommendation applies to
to encrypt organizations with a related compliance requirement.
data at rest

SQL servers Implementing Transparent Data Encryption (TDE) with Audit, 2.0.1
should use your own key provides increased transparency and control Deny,
customer- over the TDE Protector, increased security with an HSM- Disabled
managed keys backed external service, and promotion of separation of
to encrypt duties. This recommendation applies to organizations
data at rest with a related compliance requirement.

DP-6: Use a secure key management process

Features

Key Management in Azure Key Vault

Description: The service supports Azure Key Vault integration for any customer keys,
secrets, or certificates. Learn more.

Supported Enabled By Default Configuration Responsibility

True False Shared

Feature notes: Certain features can use AKV for keys, for example, when using Always
Encrypted.

Configuration Guidance: Use Azure Key Vault to create and control the life cycle of your
encryption keys (TDE and Always Encrypted), including key generation, distribution, and
storage. Rotate and revoke your keys in Azure Key Vault and your service based on a
defined schedule or when there is a key retirement or compromise. When there is a
need to use customer-managed key (CMK) in the workload, service, or application level,
ensure you follow the best practices for key management. If you need to bring your own
key (BYOK) to the service (such as importing HSM-protected keys from your on-
premises HSMs into Azure Key Vault), follow recommended guidelines to perform initial
key generation and key transfer.
Reference: Configure Always Encrypted by using Azure Key Vault

Asset management
For more information, see the Microsoft cloud security benchmark: Asset management.

AM-2: Use only approved services

Features

Azure Policy Support

Description: Service configurations can be monitored and enforced via Azure Policy.
Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: Use Microsoft Defender for Cloud to configure Azure Policy to
audit and enforce configurations of your Azure resources. Use Azure Monitor to create
alerts when there is a configuration deviation detected on the resources. Use Azure
Policy [deny] and [deploy if not exists] effects to enforce secure configuration across
Azure resources.

Reference: Azure Policy built-in definitions for Azure SQL Database & SQL Managed
Instance

Logging and threat detection


For more information, see the Microsoft cloud security benchmark: Logging and threat
detection.

LT-1: Enable threat detection capabilities

Features

Microsoft Defender for Service / Product Offering


Description: Service has an offering-specific Microsoft Defender solution to monitor and
alert on security issues. Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: Microsoft Defender for Azure SQL helps you discover and
mitigate potential database vulnerabilities and alerts you to anomalous activities that
may be an indication of a threat to your databases.

Reference: Overview of Microsoft Defender for Azure SQL

Microsoft Defender for Cloud monitoring


Azure Policy built-in definitions - Microsoft.Sql:

Name
Description Effect(s) Version

(Azure portal) (GitHub)

Azure Defender for SQL should be Audit SQL servers without AuditIfNotExists, 2.0.1
enabled for unprotected Azure SQL Advanced Data Security Disabled
servers

Azure Defender for SQL should be Audit each SQL Managed AuditIfNotExists, 1.0.2
enabled for unprotected SQL Instance without advanced Disabled
Managed Instances data security.

LT-3: Enable logging for security investigation

Other guidance for LT-3

Enable logging at the server level as this will filter down to databases, too.

Microsoft Defender for Cloud monitoring


Azure Policy built-in definitions - Microsoft.Sql:

Name
Description Effect(s) Version

(Azure portal) (GitHub)


Name
Description Effect(s) Version

(Azure portal) (GitHub)

Auditing on Auditing on your SQL Server should be enabled AuditIfNotExists, 2.0.0


SQL server to track database activities across all databases Disabled
should be on the server and save them in an audit log.
enabled

LT-4: Enable logging for security investigation

Features

Azure Resource Logs

Description: Service produces resource logs that can provide enhanced service-specific
metrics and logging. The customer can configure these resource logs and send them to
their own data sink like a storage account or log analytics workspace. Learn more.

Supported Enabled By Default Configuration Responsibility

True False Customer

Configuration Guidance: Enable resource logs for the service. For example, Key Vault
supports additional resource logs for actions that get a secret from a key vault or and
Azure SQL has resource logs that track requests to a database. The content of resource
logs varies by the Azure service and resource type.

Reference: Monitoring Azure SQL Database data reference

Backup and recovery


For more information, see the Microsoft cloud security benchmark: Backup and recovery.

BR-1: Ensure regular automated backups

Features

Azure Backup

Description: The service can be backed up by the Azure Backup service. Learn more.
Supported Enabled By Default Configuration Responsibility

False Not Applicable Not Applicable

Configuration Guidance: This feature is not supported to secure this service.

Service Native Backup Capability

Description: Service supports its own native backup capability (if not using Azure
Backup). Learn more.

Supported Enabled By Default Configuration Responsibility

True True Microsoft

Configuration Guidance: No additional configurations are required as this is enabled on


a default deployment.

Reference: Automated backups - Azure SQL Database

Next steps
See the Microsoft cloud security benchmark overview
Learn more about Azure security baselines
SQL vulnerability assessment helps you
identify database vulnerabilities
Article • 06/15/2023

SQL vulnerability assessment is an easy-to-configure service that can discover, track, and
help you remediate potential database vulnerabilities. Use it to proactively improve your
database security for:


Azure SQL Database

Azure SQL Managed Instance

Azure Synapse Analytics

Vulnerability assessment is part of Microsoft Defender for Azure SQL, which is a unified
package for advanced SQL security capabilities. Vulnerability assessment can be
accessed and managed from each SQL database resource in the Azure portal.

7 Note

Vulnerability assessment is supported for Azure SQL Database, Azure SQL Managed
Instance, and Azure Synapse Analytics. Databases in Azure SQL Database, Azure
SQL Managed Instance, and Azure Synapse Analytics are referred to collectively in
the remainder of this article as databases, and the server is referring to the server
that hosts databases for Azure SQL Database and Azure Synapse.

What is SQL vulnerability assessment?


SQL vulnerability assessment is a service that provides visibility into your security state.
Vulnerability assessment includes actionable steps to resolve security issues and
enhance your database security. It can help you to monitor a dynamic database
environment where changes are difficult to track and improve your SQL security posture.

Vulnerability assessment is a scanning service built into Azure SQL Database. The service
employs a knowledge base of rules that flag security vulnerabilities. It highlights
deviations from best practices, such as misconfigurations, excessive permissions, and
unprotected sensitive data.

The rules are based on Microsoft's best practices and focus on the security issues that
present the biggest risks to your database and its valuable data. They cover database-
level issues and server-level security issues, like server firewall settings and server-level
permissions.
Results of the scan include actionable steps to resolve each issue and provide
customized remediation scripts where applicable. You can customize an assessment
report for your environment by setting an acceptable baseline for:

Permission configurations
Feature configurations
Database settings

What are the express and classic


configurations?
You can configure vulnerability assessment for your SQL databases with either:

Express configuration – The default procedure that lets you configure vulnerability
assessment without dependency on external storage to store baseline and scan
result data.

Classic configuration – The legacy procedure that requires you to manage an


Azure storage account to store baseline and scan result data.

What's the difference between the express and classic


configuration?
Configuration modes benefits and limitations comparison:

Parameter Express configuration Classic configuration

Supported • Azure SQL Database


• Azure SQL Database

SQL Flavors • Azure Synapse Dedicated SQL Pools • Azure SQL Managed Instance

(formerly SQL DW) • Azure Synapse Analytics

Supported • Subscription
• Subscription

Policy Scope • Server • Server

• Database

Dependencies None Azure storage account

Recurring scan • Always active


• Configurable on/off

• Scan scheduling is internal and not Scan scheduling is internal and not
configurable configurable

Supported All vulnerability assessment rules for All vulnerability assessment rules for
Rules the supported resource type. the supported resource type.
Parameter Express configuration Classic configuration

Baseline • Batch – several rules in one • Single rule


Settings command

• Set by latest scan results

• Single rule

Apply baseline Will take effect without rescanning the Will take effect only after rescanning
database the database

Single rule Maximum of 1 MB Unlimited


scan result
size

Email • Logic Apps • Internal scheduler

notifications • Logic Apps

Scan export Azure Resource Graph Excel format, Azure Resource Graph

Next steps
Enable SQL vulnerability assessments
Express configuration common questions and Troubleshooting.
Learn more about Microsoft Defender for Azure SQL.
Learn more about data discovery and classification.
Learn more about storing vulnerability assessment scan results in a storage
account accessible behind firewalls and VNets.
Monitor your SQL deployments with SQL Insights (preview)
Article • 09/21/2022

Applies to:
SQL Server on Azure VM
Azure SQL Database
Azure SQL Managed Instance

SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL family. SQL Insights uses dynamic
management views to expose the data that you need to monitor health, diagnose problems, and tune performance.

SQL Insights performs all monitoring remotely. Monitoring agents on dedicated virtual machines connect to your SQL resources and
remotely gather data. The gathered data is stored in Azure Monitor Logs to enable easy aggregation, filtering, and trend analysis. You
can view the collected data from the SQL Insights workbook template, or you can delve directly into the data by using log queries.

The following diagram details the steps taken by information from the database engine and Azure resource logs, and how they can
be surfaced. For a more detailed diagram of Azure SQL logging, see Monitoring and diagnostic telemetry.

Logs
Log alerts
Gathered Collection InsightsMetrics SQL Insights
Database engine Agent Workbooks
telemetry Table
Log
VM Analytics

SQL Database,
SQL Managed Instance, or
SQL Server on Azure VMs

Pricing
There is no direct cost for SQL Insights (preview). All costs are incurred by the virtual machines that gather the data, the Log Analytics
workspaces that store the data, and any alert rules configured on the data.

Virtual machines
For virtual machines, you're charged based on the pricing published on the virtual machines pricing page . The number of virtual
machines that you need will vary based on the number of connection strings you want to monitor. We recommend allocating one
virtual machine of size Standard_B2s for every 100 connection strings. For more information, see Azure virtual machine requirements.

Log Analytics workspaces


For the Log Analytics workspaces, you're charged based on the pricing published on the Azure Monitor pricing page . The Log
Analytics workspaces that SQL Insights uses will incur costs for data ingestion, data retention, and (optionally) data export.

Exact charges will vary based on the amount of data ingested, retained, and exported. The amount of this data will vary based on
your database activity and the collection settings defined in your monitoring profiles.

Alert rules
For alert rules in Azure Monitor, you're charged based on the pricing published on the Azure Monitor pricing page . If you choose
to create alerts with SQL Insights (preview), you're charged for any alert rules created and any notifications sent.

Supported versions
SQL Insights (preview) supports the following versions of SQL Server:

SQL Server 2012 and newer

SQL Insights (preview) supports SQL Server running in the following environments:

Azure SQL Database


Azure SQL Managed Instance
SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the SQL virtual machine provider)
Azure VMs (SQL Server running on virtual machines not registered with the SQL virtual machine provider)
SQL Insights (preview) has no support or has limited support for the following:

Non-Azure instances: SQL Server running on virtual machines outside Azure is not supported.
Azure SQL Database elastic pools: Metrics can't be gathered for elastic pools or for databases within elastic pools.
Azure SQL Database low service tiers: Metrics can't be gathered for databases on Basic, S0, S1, and S2 service tiers.
Azure SQL Database serverless tier: Metrics can be gathered for databases through the serverless compute tier. However, the
process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.
Secondary replicas: Metrics can be gathered for only a single secondary replica per database. If a database has more than one
secondary replica, only one can be monitored.
Authentication with Azure Active Directory: The only supported method of authentication for monitoring is SQL authentication.
For SQL Server on Azure Virtual Machines, authentication through Active Directory on a custom domain controller is not
supported.

Regional availability
SQL Insights (preview) is available in all Azure regions where Azure Monitor is available , with the exception of Azure Government
and national clouds.

Open SQL Insights


To open SQL Insights (preview):

1. In the Azure portal, go to the Azure Monitor menu.


2. In the Insights section, select SQL (preview).
3. Select a tile to load the experience for the SQL resource that you're monitoring.

For more instructions, see Enable SQL Insights (preview) and Troubleshoot SQL Insights (preview).

Collected data
SQL Insights performs all monitoring remotely. No agents are installed on the virtual machines running SQL Server.

SQL Insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources. Each monitoring virtual
machine has the Azure Monitor agent and the Workload Insights (WLI) extension installed.
The WLI extension includes the open-source Telegraf agent . SQL Insights uses data collection rules to specify the data collection
settings for Telegraf's SQL Server plug-in .

Different sets of data are available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server. The following tables
describe the available data. You can customize which datasets to collect and the frequency of collection when you create a
monitoring profile.

The tables have the following columns:

Friendly name: Name of the query as shown in the Azure portal when you're creating a monitoring profile.
Configuration name: Name of the query as shown in the Azure portal when you're editing a monitoring profile.
Namespace: Name of the query as found in a Log Analytics workspace. This identifier appears in the InsighstMetrics table on
the Namespace property in the Tags column.
DMVs: Dynamic managed views that are used to produce the dataset.
Enabled by default: Whether the data is collected by default.
Default collection frequency: How often the data is collected by default.

Data for Azure SQL Database

Friendly Configuration name Namespace DMVs Enabled Default


name by collection
default frequency

DB wait stats AzureSQLDBWaitStats sqlserver_azuredb_waitstats sys.dm_db_wait_stats No Not


applicable

DBO wait AzureSQLDBOsWaitstats sqlserver_waitstats sys.dm_os_wait_stats Yes 60


stats seconds

Memory AzureSQLDBMemoryClerks sqlserver_memory_clerks sys.dm_os_memory_clerks Yes 60


clerks seconds

Database I/O AzureSQLDBDatabaseIO sqlserver_database_io sys.dm_io_virtual_file_stats


Yes 60
sys.database_files
seconds
tempdb.sys.database_files

Server AzureSQLDBServerProperties sqlserver_server_properties sys.dm_os_job_object


Yes 60
properties sys.database_files
seconds
sys.databases

sys.database_service_objectives

Performance AzureSQLDBPerformanceCounters sqlserver_performance sys.dm_os_performance_counters


Yes 60
counters sys.databases seconds

Resource AzureSQLDBResourceStats sqlserver_azure_db_resource_stats sys.dm_db_resource_stats Yes 60


stats seconds

Resource AzureSQLDBResourceGovernance sqlserver_db_resource_governance sys.dm_user_db_resource_governance Yes 60


governance seconds

Requests AzureSQLDBRequests sqlserver_requests sys.dm_exec_sessions


No Not
sys.dm_exec_requests
applicable
sys.dm_exec_sql_text

Schedulers AzureSQLDBSchedulers sqlserver_schedulers sys.dm_os_schedulers No Not


applicable

Data for Azure SQL Managed Instance

Friendly Configuration name Namespace DMVs Enabled Default


name by collection
default frequency

Wait stats AzureSQLMIOsWaitstats sqlserver_waitstats sys.dm_os_wait_stats Yes 60


seconds

Memory AzureSQLMIMemoryClerks sqlserver_memory_clerks sys.dm_os_memory_clerks Yes 60


clerks seconds
Friendly Configuration name Namespace DMVs Enabled Default
name by collection
default frequency

Database AzureSQLMIDatabaseIO sqlserver_database_io sys.dm_io_virtual_file_stats


Yes 60
I/O sys.master_files seconds

Server AzureSQLMIServerProperties sqlserver_server_properties sys.server_resource_stats Yes 60


properties seconds

Performance AzureSQLMIPerformanceCounters sqlserver_performance sys.dm_os_performance_counters


Yes 60
counters sys.databases seconds

Resource AzureSQLMIResourceStats sqlserver_azure_db_resource_stats sys.server_resource_stats Yes 60


stats seconds

Resource AzureSQLMIResourceGovernance sqlserver_instance_resource_governance sys.dm_instance_resource_governance Yes 60


governance seconds

Requests AzureSQLMIRequests sqlserver_requests sys.dm_exec_sessions


No NA
sys.dm_exec_requests

sys.dm_exec_sql_text

Schedulers AzureSQLMISchedulers sqlserver_schedulers sys.dm_os_schedulers No Not


applicable

Data for SQL Server

Friendly Configuration name Namespace DMVs Enabled Default


name by collection
default frequency

Wait stats SQLServerWaitStatsCategorized sqlserver_waitstats sys.dm_os_wait_stats Yes 60


seconds

Memory SQLServerMemoryClerks sqlserver_memory_clerks sys.dm_os_memory_clerks Yes 60


clerks seconds

Database SQLServerDatabaseIO sqlserver_database_io sys.dm_io_virtual_file_stats


Yes 60
I/O sys.master_files seconds

Server SQLServerProperties sqlserver_server_properties sys.dm_os_sys_info Yes 60


properties seconds

Performance SQLServerPerformanceCounters sqlserver_performance sys.dm_os_performance_counters Yes 60


counters seconds

Volume SQLServerVolumeSpace sqlserver_volume_space sys.master_files Yes 60


space seconds

SQL Server SQLServerCpu sqlserver_cpu sys.dm_os_ring_buffers Yes 60


CPU seconds

Schedulers SQLServerSchedulers sqlserver_schedulers sys.dm_os_schedulers No Not


applicable

Requests SQLServerRequests sqlserver_requests sys.dm_exec_sessions


No Not
sys.dm_exec_requests
applicable
sys.dm_exec_sql_text

Availability SQLServerAvailabilityReplicaStates sqlserver_hadr_replica_states sys.dm_hadr_availability_replica_states


No 60
replica sys.availability_replicas
seconds
states sys.availability_groups

sys.dm_hadr_availability_group_states

Availability SQLServerDatabaseReplicaStates sqlserver_hadr_dbreplica_states sys.dm_hadr_database_replica_states


No 60
database sys.availability_replicas seconds
replicas

Next steps
For frequently asked questions about SQL Insights (preview), see Frequently asked questions.
Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance
Tutorial: Getting started with Always
Encrypted
Article • 02/28/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

This tutorial teaches you how to get started with Always Encrypted. It will show you:

" How to encrypt selected columns in your database.


" How to query encrypted columns.

7 Note

If you're looking for information on Always Encrypted with secure enclaves, see
the following tutorials instead:

Getting started using Always Encrypted with secure enclaves


Tutorial: Getting started using Always Encrypted with secure enclaves in
SQL Server

Prerequisites
For this tutorial, you need:

An empty database in Azure SQL Database, Azure SQL Managed Instance, or SQL
Server. The below instructions assume the database name is ContosoHR. You need
to be an owner of the database (a member of the db_owner role). For information
on how to create a database, see Quickstart: Create a single database - Azure SQL
Database or Create a database in SQL Server.
Optional, but recommended, especially if your database is in Azure: a key vault in
Azure Key Vault. For information on how to create a key vault, see Quickstart:
Create a key vault using the Azure portal.
If your key vault uses the access policy permissions model, make sure you have
the following key permissions in the key vault: get , list , create , unwrap key ,
wrap key , verify , sign . See Assign a Key Vault access policy.

If you're using the Azure role-based access control (RBAC) permission model,
make you sure you're a member of the Key Vault Crypto Officer role for your
key vault. See Provide access to Key Vault keys, certificates, and secrets with an
Azure role-based access control.
The latest version of SQL Server Management Studio (SSMS) or the latest version
of the SqlServer and Az PowerShell modules. The Az PowerShell module is required
only if you're using Azure Key Vault.

Step 1: Create and populate the database


schema
In this step, you'll create the HR schema and the Employees table. Then, you'll populate
the table with some data.

SSMS

1. Connect to your database. For instructions on how to connect to a database


from SSMS, see Quickstart: Connect and query an Azure SQL Database or an
Azure SQL Managed Instance using SQL Server Management Studio (SSMS) or
Quickstart: Connect and query a SQL Server instance using SQL Server
Management Studio (SSMS).

2. Open a new query window for the ContosoHR database.

3. Paste in and execute the below statements to create a new table, named
Employees.

SQL

CREATE SCHEMA [HR];

GO

CREATE TABLE [HR].[Employees]

[EmployeeID] [int] IDENTITY(1,1) NOT NULL

, [SSN] [char](11) NOT NULL

, [FirstName] [nvarchar](50) NOT NULL

, [LastName] [nvarchar](50) NOT NULL

, [Salary] [money] NOT NULL

) ON [PRIMARY];

4. Paste in and execute the below statements to add a few employee records to
the Employees table.

SQL

INSERT INTO [HR].[Employees]

[SSN]

, [FirstName]

, [LastName]

, [Salary]

VALUES

'795-73-9838'

, N'Catherine'

, N'Abel'

, $31692

);

INSERT INTO [HR].[Employees]

[SSN]

, [FirstName]

, [LastName]

, [Salary]

VALUES

'990-00-6818'

, N'Kim'

, N'Abercrombie'

, $55415

);

Step 2: Encrypt columns


In this step, you'll provision a column master key and a column encryption key for
Always Encrypted. Then, you'll encrypt the SSN and Salary columns in the Employees
table.

SSMS

SSMS provides a wizard that helps you easily configure Always Encrypted by setting
up a column master key, a column encryption key, and encrypt selected columns.

1. In Object Explorer, expand Databases > ContosoHR > Tables.

2. Right-click the Employees table and select Encrypt Columns to open the
Always Encrypted wizard.
3. Select Next on the Introduction page of the wizard.

4. On the Column Selection page.


a. Select the SSN and Salary columns. Choose deterministic encryption for
the SSN column and randomized encryption for the Salary column.
Deterministic encryption supports queries, such as point lookup searches
that involve equality comparisons on encrypted columns. Randomized
encryption doesn't support any computations on encrypted columns.
b. Leave CEK-Auto1 (New) as the column encryption key for both columns.
This key doesn't exist yet and will be generated by the wizard.
c. Select Next.
5. On the Master Key Configuration page, configure a new column master key
that will be generated by the wizard. First, you need to select where you want
to store your column master key. The wizard supports two key store types:

Azure Key Vault - recommended if your database is in Azure


Windows certificate store

In general, Azure Key Vault is the recommended option, especially if your


database is in Azure.

To use Azure Key Vault:


a. Select Azure Key Vault.
b. Select Sign in and complete signing in to Azure.
c. After you've signed in, the page will display the list of subscriptions
and key vaults, you have access to. Select an Azure subscription
containing the key vault, you want to use.
d. Select your key vault.
e. Select Next.
To use Windows certificate store:

a. Select Windows certificate store.

b. Leave the default selection of Current User - this will instruct the
wizard to generate a certificate (your new column master key) in the
Current User store.
c. Select Next.

6. On the In-Place Encryption Settings page, no additional configuration is


required because the database does not have an enclave enabled. Select Next.

7. On the Run Settings page, you're asked if you want to proceed with
encryption or generate a PowerShell script to be executed later. Leave the
default settings and select Next.

8. On the Summary page, the wizard informs you about the actions it will
execute. Check all the information is correct and select Finish.

9. On the Results page, you can monitor the progress of the wizard's operations.
Wait until all operations complete successfully and select Close.
10. (Optional) Explore the changes the wizard has made in your database.

a. Expand ContosoHR > Security > Always Encrypted Keys to explore the
metadata objects for the column master key and the column encryption
that the wizard created.

b. You can also run the below queries against the system catalog views that
contain key metadata.

SQL

SELECT * FROM sys.column_master_keys;

SELECT * FROM sys.column_encryption_keys

SELECT * FROM sys.column_encryption_key_values

c. In Object Explorer, right-click the Employees table and select Script Table
as > CREATE To > New Query Editor Window. This will open a new query
window with the CREATE TABLE statement for the Employees table. Note
the ENCRYPTED WITH clause that appears in the definitions of the SSN
and Salary columns.
d. You can also run the below query against sys.columns to retrieve column-
level encryption metadata for the two encrypted columns.

SQL

SELECT

[name]

, [encryption_type]

, [encryption_type_desc]

, [encryption_algorithm_name]

, [column_encryption_key_id]

FROM sys.columns

WHERE [encryption_type] IS NOT NULL;

Step 3: Query encrypted columns


SSMS

1. Connect to your database with Always Encrypted disabled for your


connection.
a. Open a new query window.
b. Right-click anywhere in the query window and select Connection > Change
Connection. This will open the Connect to Database Engine dialog.
c. Select Options <<. This will show additional tabs in the Connect to
Database Engine dialog.
d. Select the Always Encrypted tab.
e. Make sure Enable Always Encrypted (column encryption) isn't selected.
f. Select Connect.
2. Paste in and execute the following query. The query should return binary
encrypted data.

SQL

SELECT [SSN], [Salary] FROM [HR].[Employees]

3. Connect to your database with Always Encrypted enabled for your connection.
a. Right-click anywhere in the query window and select Connection > Change
Connection. This will open the Connect to Database Engine dialog.
b. Select Options <<. This will show additional tabs in the Connect to
Database Engine dialog.
c. Select the Always Encrypted tab.
d. Select Enable Always Encrypted (column encryption).
e. Select Connect.
4. Rerun the same query. Since you're connected with Always Encrypted enabled
for your database connection, the client driver in SSMS will attempt to decrypt
data stored in both encrypted columns. If you use Azure Key Vault, you may
be prompted to sign into Azure.

5. Enable Parameterization for Always Encrypted. This feature allows you to run
queries that filter data by encrypted columns (or insert data to encrypted
columns).
a. Select Query from the main menu of SSMS.
b. Select Query Options....
c. Navigate to Execution > Advanced.
d. Make sure Enable Parameterization for Always Encrypted is checked.
e. Select OK.
6. Paste in and execute the below query, which filters data by the encrypted SSN
column. The query should return one row containing plaintext values.

SQL

DECLARE @SSN [char](11) = '795-73-9838'

SELECT [SSN], [Salary] FROM [HR].[Employees]

WHERE [SSN] = @SSN

7. Optionally, if you're using Azure Key Vault configured with the access policy
permissions model, follow the below steps to see what happens when a user
tries to retrieve plaintext data from encrypted columns without having access
to the column master key protecting the data.
a. Remove the key unwrap permission for yourself in the access policy for your
key vault. For more information, see Assign a Key Vault access policy.
b. Since the client driver in SSMS caches the column encryption keys acquired
from a key vault for 2 hours, close SSMS and open it again. This will ensure
the key cache is empty.
c. Connect to your database with Always Encrypted enabled for your
connection.
d. Paste in and execute the following query. The query should fail with the
error message indicating you're missing the required unwrap permission.

SQL

SELECT [SSN], [Salary] FROM [HR].[Employees]

Next steps
Develop applications using Always Encrypted

See also
Always Encrypted documentation
Always Encrypted with secure enclaves documentation
Provision Always Encrypted keys using SQL Server Management Studio
Configure Always Encrypted using PowerShell
Always Encrypted wizard
Query columns using Always Encrypted with SQL Server Management Studio
Copy and transform data in Azure SQL Database by using Azure
Data Factory or Azure Synapse Analytics
Article • 04/06/2023

APPLIES TO:
Azure Data Factory
Azure Synapse Analytics

This article outlines how to use Copy Activity in Azure Data Factory or Azure Synapse pipelines to copy data from and to Azure SQL
Database, and use Data Flow to transform data in Azure SQL Database. To learn more, read the introductory article for Azure Data Factory
or Azure Synapse Analytics.

Supported capabilities
This Azure SQL Database connector is supported for the following capabilities:

Supported capabilities IR Managed private endpoint

Copy activity (source/sink) ①② ✓

Mapping data flow (source/sink) ① ✓

Lookup activity ①② ✓

GetMetadata activity ①② ✓

Script activity ①② ✓

Stored procedure activity ①② ✓

① Azure integration runtime ② Self-hosted integration runtime


For Copy activity, this Azure SQL Database connector supports these functions:

Copying data by using SQL authentication and Azure Active Directory (Azure AD) Application token authentication with a service
principal or managed identities for Azure resources.
As a source, retrieving data by using a SQL query or a stored procedure. You can also choose to parallel copy from an Azure SQL
Database source, see the Parallel copy from SQL database section for details.
As a sink, automatically creating destination table if not exists based on the source schema; appending data to a table or invoking a
stored procedure with custom logic during the copy.

If you use Azure SQL Database serverless tier, note when the server is paused, activity run fails instead of waiting for the auto resume to be
ready. You can add activity retry or chain additional activities to make sure the server is live upon the actual execution.

) Important

If you copy data by using the Azure integration runtime, configure a server-level firewall rule so that Azure services can access the
server.
If you copy data by using a self-hosted integration runtime, configure the firewall to allow the appropriate IP range. This range
includes the machine's IP that's used to connect to Azure SQL Database.

Get started
To perform the Copy activity with a pipeline, you can use one of the following tools or SDKs:

The Copy Data tool


The Azure portal
The .NET SDK
The Python SDK
Azure PowerShell
The REST API
The Azure Resource Manager template

Create an Azure SQL Database linked service using UI


Use the following steps to create an Azure SQL Database linked service in the Azure portal UI.
1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:

Azure Data Factory

2. Search for SQL and select the Azure SQL Database connector.
3. Configure the service details, test the connection, and create the new linked service.
Connector configuration details
The following sections provide details about properties that are used to define Azure Data Factory or Synapse pipeline entities specific to
an Azure SQL Database connector.

Linked service properties


These generic properties are supported for an Azure SQL Database linked service:

Property Description Required

type The type property must be set to AzureSqlDatabase. Yes

connectionString Specify information needed to connect to the Azure SQL Database instance for the connectionString property.
Yes
You also can put a password or service principal key in Azure Key Vault. If it's SQL authentication, pull the password
configuration out of the connection string. For more information, see the JSON example following the table and
Store credentials in Azure Key Vault.

azureCloudType For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application No
is registered.

Allowed values are AzurePublic, AzureChina, AzureUsGovernment, and AzureGermany. By default, the data factory
or Synapse pipeline's cloud environment is used.

alwaysEncryptedSettings Specify alwaysencryptedsettings information that's needed to enable Always Encrypted to protect sensitive data No
stored in SQL server by using either managed identity or service principal. For more information, see the JSON
example following the table and Using Always Encrypted section. If not specified, the default always encrypted
setting is disabled.
Property Description Required

connectVia This integration runtime is used to connect to the data store. You can use the Azure integration runtime or a self- No
hosted integration runtime if your data store is located in a private network. If not specified, the default Azure
integration runtime is used.

For different authentication types, refer to the following sections on specific properties, prerequisites and JSON samples, respectively:

SQL authentication
Service principal authentication
System-assigned managed identity authentication
User-assigned managed identity authentication

 Tip

If you hit an error with the error code "UserErrorFailedToConnectToSqlServer" and a message like "The session limit for the database is
XXX and has been reached," add Pooling=false to your connection string and try again. Pooling=false is also recommended for
SHIR(Self Hosted Integration Runtime) type linked service setup. Pooling and other connection parameters can be added as new
parameter names and values in Additional connection properties section of linked service creation form.

SQL authentication
To use SQL authentication authentication type, specify the generic properties that are described in the preceding section.

Example: using SQL authentication

JSON

"name": "AzureSqlDbLinkedService",

"properties": {

"type": "AzureSqlDatabase",

"typeProperties": {

"connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;User


ID=<username>@<servername>;Password=<password>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"

},

"connectVia": {

"referenceName": "<name of Integration Runtime>",

"type": "IntegrationRuntimeReference"

Example: password in Azure Key Vault

JSON

"name": "AzureSqlDbLinkedService",

"properties": {

"type": "AzureSqlDatabase",

"typeProperties": {

"connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;User


ID=<username>@<servername>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30",

"password": {

"type": "AzureKeyVaultSecret",

"store": {

"referenceName": "<Azure Key Vault linked service name>",

"type": "LinkedServiceReference"

},

"secretName": "<secretName>"

},

"connectVia": {

"referenceName": "<name of Integration Runtime>",

"type": "IntegrationRuntimeReference"

Example: Use Always Encrypted


JSON

"name": "AzureSqlDbLinkedService",

"properties": {

"type": "AzureSqlDatabase",

"typeProperties": {

"connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=<databasename>;User


ID=<username>@<servername>;Password=<password>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"

},

"alwaysEncryptedSettings": {

"alwaysEncryptedAkvAuthType": "ServicePrincipal",

"servicePrincipalId": "<service principal id>",

"servicePrincipalKey": {

"type": "SecureString",

"value": "<service principal key>"

},

"connectVia": {

"referenceName": "<name of Integration Runtime>",

"type": "IntegrationRuntimeReference"

Service principal authentication


To use service principal authentication, in addition to the generic properties that are described in the preceding section, specify the
following properties:

Property Description Required

servicePrincipalId Specify the application's client ID. Yes

servicePrincipalKey Specify the application's key. Mark this field as SecureString to store it securely or reference a secret stored in Azure Key Yes
Vault.

tenant Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by Yes
hovering the mouse in the upper-right corner of the Azure portal.

You also need to follow the steps below:

1. Create an Azure Active Directory application from the Azure portal. Make note of the application name and the following values that
define the linked service:

Application ID
Application key
Tenant ID

2. Provision an Azure Active Directory administrator for your server on the Azure portal if you haven't already done so. The Azure AD
administrator must be an Azure AD user or Azure AD group, but it can't be a service principal. This step is done so that, in the next
step, you can use an Azure AD identity to create a contained database user for the service principal.

3. Create contained database users for the service principal. Connect to the database from or to which you want to copy data by using
tools like SQL Server Management Studio, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following
T-SQL:

SQL

CREATE USER [your application name] FROM EXTERNAL PROVIDER;

4. Grant the service principal needed permissions as you normally do for SQL users or others. Run the following code. For more options,
see this document.

SQL

ALTER ROLE [role name] ADD MEMBER [your application name];

5. Configure an Azure SQL Database linked service in an Azure Data Factory or Synapse workspace.
Linked service example that uses service principal authentication

JSON

"name": "AzureSqlDbLinkedService",

"properties": {

"type": "AzureSqlDatabase",

"typeProperties": {

"connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=


<databasename>;Connection Timeout=30",

"servicePrincipalId": "<service principal id>",

"servicePrincipalKey": {

"type": "SecureString",

"value": "<service principal key>"

},

"tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>"

},

"connectVia": {

"referenceName": "<name of Integration Runtime>",

"type": "IntegrationRuntimeReference"

System-assigned managed identity authentication


A data factory or Synapse workspace can be associated with a system-assigned managed identity for Azure resources that represents the
service when authenticating to other resources in Azure. You can use this managed identity for Azure SQL Database authentication. The
designated factory or Synapse workspace can access and copy data from or to your database by using this identity.

To use system-assigned managed identity authentication, specify the generic properties that are described in the preceding section, and
follow these steps.

1. Provision an Azure Active Directory administrator for your server on the Azure portal if you haven't already done so. The Azure AD
administrator can be an Azure AD user or an Azure AD group. If you grant the group with managed identity an admin role, skip steps
3 and 4. The administrator has full access to the database.

2. Create contained database users for the managed identity. Connect to the database from or to which you want to copy data by using
tools like SQL Server Management Studio, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following
T-SQL:

SQL

CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER;

3. Grant the managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more
options, see this document.

SQL

ALTER ROLE [role name] ADD MEMBER [your_resource_name];

4. Configure an Azure SQL Database linked service.

Example

JSON

"name": "AzureSqlDbLinkedService",

"properties": {

"type": "AzureSqlDatabase",

"typeProperties": {

"connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=


<databasename>;Connection Timeout=30"

},

"connectVia": {

"referenceName": "<name of Integration Runtime>",

"type": "IntegrationRuntimeReference"

User-assigned managed identity authentication


A data factory or Synapse workspace can be associated with a user-assigned managed identities that represents the service when
authenticating to other resources in Azure. You can use this managed identity for Azure SQL Database authentication. The designated
factory or Synapse workspace can access and copy data from or to your database by using this identity.

To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section,
specify the following properties:

Property Description Required

credentials Specify the user-assigned managed identity as the credential object. Yes

You also need to follow the steps below:

1. Provision an Azure Active Directory administrator for your server on the Azure portal if you haven't already done so. The Azure AD
administrator can be an Azure AD user or an Azure AD group. If you grant the group with user-assigned managed identity an admin
role, skip steps 3. The administrator has full access to the database.

2. Create contained database users for the user-assigned managed identity. Connect to the database from or to which you want to copy
data by using tools like SQL Server Management Studio, with an Azure AD identity that has at least ALTER ANY USER permission. Run
the following T-SQL:

SQL

CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER;

3. Create one or multiple user-assigned managed identities and grant the user-assigned managed identity needed permissions as you
normally do for SQL users and others. Run the following code. For more options, see this document.

SQL

ALTER ROLE [role name] ADD MEMBER [your_resource_name];

4. Assign one or multiple user-assigned managed identities to your data factory and create credentials for each user-assigned managed
identity.

5. Configure an Azure SQL Database linked service.

Example:

JSON

"name": "AzureSqlDbLinkedService",

"properties": {

"type": "AzureSqlDatabase",

"typeProperties": {

"connectionString": "Data Source=tcp:<servername>.database.windows.net,1433;Initial Catalog=


<databasename>;Connection Timeout=30",

"credential": {

"referenceName": "credential1",

"type": "CredentialReference"

},

"connectVia": {

"referenceName": "<name of Integration Runtime>",

"type": "IntegrationRuntimeReference"

Dataset properties
For a full list of sections and properties available to define datasets, see Datasets.
The following properties are supported for Azure SQL Database dataset:

Property Description Required

type The type property of the dataset must be set to AzureSqlTable. Yes

schema Name of the schema. No for source, Yes for


sink

table Name of the table/view. No for source, Yes for


sink

tableName Name of the table/view with schema. This property is supported for backward compatibility. For new workload, use No for source, Yes for
schema and table . sink

Dataset properties example


JSON

"name": "AzureSQLDbDataset",

"properties":

"type": "AzureSqlTable",

"linkedServiceName": {

"referenceName": "<Azure SQL Database linked service name>",

"type": "LinkedServiceReference"

},

"schema": [ < physical schema, optional, retrievable during authoring > ],

"typeProperties": {

"schema": "<schema_name>",

"table": "<table_name>"

Copy activity properties


For a full list of sections and properties available for defining activities, see Pipelines. This section provides a list of properties supported by
the Azure SQL Database source and sink.

Azure SQL Database as the source

 Tip

To load data from Azure SQL Database efficiently by using data partitioning, learn more from Parallel copy from SQL database.

To copy data from Azure SQL Database, the following properties are supported in the copy activity source section:

Property Description Required

type The type property of the copy activity source must be set to AzureSqlSource. "SqlSource" type is still Yes
supported for backward compatibility.

sqlReaderQuery This property uses the custom SQL query to read data. An example is select * from MyTable . No

sqlReaderStoredProcedureName The name of the stored procedure that reads data from the source table. The last SQL statement must be a No
SELECT statement in the stored procedure.

storedProcedureParameters Parameters for the stored procedure.


No
Allowed values are name or value pairs. The names and casing of parameters must match the names and
casing of the stored procedure parameters.

isolationLevel Specifies the transaction locking behavior for the SQL source. The allowed values are: ReadCommitted, No
ReadUncommitted, RepeatableRead, Serializable, Snapshot. If not specified, the database's default isolation
level is used. Refer to this doc for more details.
Property Description Required

partitionOptions Specifies the data partitioning options used to load data from Azure SQL Database.
No
Allowed values are: None (default), PhysicalPartitionsOfTable, and DynamicRange.

When a partition option is enabled (that is, not None ), the degree of parallelism to concurrently load data
from an Azure SQL Database is controlled by the parallelCopies setting on the copy activity.

partitionSettings Specify the group of the settings for data partitioning.


No
Apply when the partition option isn't None .

Under partitionSettings :

partitionColumnName Specify the name of the source column in integer or date/datetime type ( int , smallint , bigint , date , No
smalldatetime , datetime , datetime2 , or datetimeoffset ) that will be used by range partitioning for parallel
copy. If not specified, the index or the primary key of the table is autodetected and used as the partition
column.
Apply when the partition option is DynamicRange . If you use a query to retrieve the source data, hook ?
AdfDynamicRangePartitionCondition in the WHERE clause. For an example, see the Parallel copy from SQL
database section.

partitionUpperBound The maximum value of the partition column for partition range splitting. This value is used to decide the No
partition stride, not for filtering the rows in table. All rows in the table or query result will be partitioned and
copied. If not specified, copy activity auto detect the value.
Apply when the partition option is DynamicRange . For an example, see the Parallel copy from SQL database
section.

partitionLowerBound The minimum value of the partition column for partition range splitting. This value is used to decide the No
partition stride, not for filtering the rows in table. All rows in the table or query result will be partitioned and
copied. If not specified, copy activity auto detect the value.

Apply when the partition option is DynamicRange . For an example, see the Parallel copy from SQL database
section.

Note the following points:

If sqlReaderQuery is specified for AzureSqlSource, the copy activity runs this query against the Azure SQL Database source to get the
data. You also can specify a stored procedure by specifying sqlReaderStoredProcedureName and storedProcedureParameters if the
stored procedure takes parameters.
When using stored procedure in source to retrieve data, note if your stored procedure is designed as returning different schema when
different parameter value is passed in, you may encounter failure or see unexpected result when importing schema from UI or when
copying data to SQL database with auto table creation.

SQL query example

JSON

"activities":[

"name": "CopyFromAzureSQLDatabase",

"type": "Copy",
"inputs": [

"referenceName": "<Azure SQL Database input dataset name>",

"type": "DatasetReference"

],

"outputs": [

"referenceName": "<output dataset name>",

"type": "DatasetReference"

],

"typeProperties": {

"source": {

"type": "AzureSqlSource",

"sqlReaderQuery": "SELECT * FROM MyTable"

},

"sink": {

"type": "<sink type>"

Stored procedure example

JSON

"activities":[

"name": "CopyFromAzureSQLDatabase",

"type": "Copy",
"inputs": [

"referenceName": "<Azure SQL Database input dataset name>",

"type": "DatasetReference"

],

"outputs": [

"referenceName": "<output dataset name>",

"type": "DatasetReference"

],

"typeProperties": {

"source": {

"type": "AzureSqlSource",

"sqlReaderStoredProcedureName": "CopyTestSrcStoredProcedureWithParameters",

"storedProcedureParameters": {

"stringData": { "value": "str3" },

"identifier": { "value": "$$Text.Format('{0:yyyy}', <datetime parameter>)", "type": "Int"}

},

"sink": {

"type": "<sink type>"

Stored procedure definition


SQL

CREATE PROCEDURE CopyTestSrcStoredProcedureWithParameters

@stringData varchar(20),

@identifier int

AS

SET NOCOUNT ON;

BEGIN

select *

from dbo.UnitTestSrcTable

where dbo.UnitTestSrcTable.stringData != stringData

and dbo.UnitTestSrcTable.identifier != identifier

END

GO

Azure SQL Database as the sink

 Tip

Learn more about the supported write behaviors, configurations, and best practices from Best practice for loading data into Azure
SQL Database.

To copy data to Azure SQL Database, the following properties are supported in the copy activity sink section:

Property Description

type The type property of the copy activity sink must be set to AzureSqlSink. "SqlSink" type is still supported for backward com

preCopyScript Specify a SQL query for the copy activity to run before writing data into Azure SQL Database. It's invoked only once per c
preloaded data.
Property Description

tableOption Specifies whether to automatically create the sink table if not exists based on the source schema.

Auto table creation is not supported when sink specifies stored procedure.

Allowed values are: none (default), autoCreate .

sqlWriterStoredProcedureName The name of the stored procedure that defines how to apply source data into a target table.

This stored procedure is invoked per batch. For operations that run only once and have nothing to do with source data, fo
preCopyScript property.

See example from Invoke a stored procedure from a SQL sink.

storedProcedureTableTypeParameterName The parameter name of the table type specified in the stored procedure.

sqlWriterTableType The table type name to be used in the stored procedure. The copy activity makes the data being moved available in a tem
procedure code can then merge the data that's being copied with existing data.

storedProcedureParameters Parameters for the stored procedure.

Allowed values are name and value pairs. Names and casing of parameters must match the names and casing of the store

writeBatchSize Number of rows to insert into the SQL table per batch.

The allowed value is integer (number of rows). By default, the service dynamically determines the appropriate batch size b

writeBatchTimeout The wait time for the batch insert operation to finish before it times out.

The allowed value is timespan. An example is "00:30:00" (30 minutes).

disableMetricsCollection The service collects metrics such as Azure SQL Database DTUs for copy performance optimization and recommendations,
access. If you are concerned with this behavior, specify true to turn it off.

 maxConcurrentConnections  The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when

WriteBehavior Specify the write behavior for copy activity to load data into Azure SQL Database.

The allowed value is Insert and Upsert. By default, the service uses insert to load data.

upsertSettings Specify the group of the settings for write behavior.

Apply when the WriteBehavior option is Upsert .

Under upsertSettings :

useTempDB Specify whether to use the a global temporary table or physical table as the interim table for upsert.

By default, the service uses global temporary table as the interim table. value is true .

interimSchemaName Specify the interim schema for creating interim table if physical table is used. Note: user need to have the permission for
interim table will share the same schema as sink table.

Apply when the useTempDB option is False .

keys Specify the column names for unique row identification. Either a single key or a series of keys can be used. If not specified

Example 1: Append data

JSON

"activities":[

"name": "CopyToAzureSQLDatabase",

"type": "Copy",
"inputs": [

"referenceName": "<input dataset name>",

"type": "DatasetReference"

],

"outputs": [

"referenceName": "<Azure SQL Database output dataset name>",

"type": "DatasetReference"

],

"typeProperties": {

"source": {

"type": "<source type>"

},

"sink": {

"type": "AzureSqlSink",

"tableOption": "autoCreate",

"writeBatchSize": 100000

Example 2: Invoke a stored procedure during copy

Learn more details from Invoke a stored procedure from a SQL sink.

JSON

"activities":[

"name": "CopyToAzureSQLDatabase",

"type": "Copy",
"inputs": [

"referenceName": "<input dataset name>",

"type": "DatasetReference"

],

"outputs": [

"referenceName": "<Azure SQL Database output dataset name>",

"type": "DatasetReference"

],

"typeProperties": {

"source": {

"type": "<source type>"

},

"sink": {

"type": "AzureSqlSink",

"sqlWriterStoredProcedureName": "CopyTestStoredProcedureWithParameters",

"storedProcedureTableTypeParameterName": "MyTable",

"sqlWriterTableType": "MyTableType",

"storedProcedureParameters": {

"identifier": { "value": "1", "type": "Int" },

"stringData": { "value": "str1" }

Example 3: Upsert data

JSON

"activities":[

"name": "CopyToAzureSQLDatabase",

"type": "Copy",
"inputs": [

"referenceName": "<input dataset name>",

"type": "DatasetReference"

],

"outputs": [

"referenceName": "<Azure SQL Database output dataset name>",

"type": "DatasetReference"

],

"typeProperties": {

"source": {

"type": "<source type>"

},

"sink": {

"type": "AzureSqlSink",

"tableOption": "autoCreate",

"writeBehavior": "upsert",

"upsertSettings": {

"useTempDB": true,

"keys": [

"<column name>"

},

Parallel copy from SQL database


The Azure SQL Database connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning
options on the Source tab of the copy activity.

When you enable partitioned copy, copy activity runs parallel queries against your Azure SQL Database source to load data by partitions.
The parallel degree is controlled by the parallelCopies setting on the copy activity. For example, if you set parallelCopies to four, the
service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a
portion of data from your Azure SQL Database.

You are suggested to enable parallel copy with data partitioning especially when you load large amount of data from your Azure SQL
Database. The following are suggested configurations for different scenarios. When copying data into file-based data store, it's
recommended to write to a folder as multiple files (only specify folder name), in which case the performance is better than writing to a
single file.

Scenario Suggested settings

Full load from large table, with physical partitions. Partition option: Physical partitions of table.

During execution, the service automatically detects the physical partitions, and copies data by
partitions.

To check if your table has physical partition or not, you can refer to this query.

Full load from large table, without physical partitions, Partition options: Dynamic range partition.

while with an integer or datetime column for data Partition column (optional): Specify the column used to partition data. If not specified, the index
partitioning. or primary key column is used.

Partition upper bound and partition lower bound (optional): Specify if you want to determine the
partition stride. This is not for filtering the rows in table, all rows in the table will be partitioned
and copied. If not specified, copy activity auto detect the values.

For example, if your partition column "ID" has values range from 1 to 100, and you set the lower
bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4
partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively.
Scenario Suggested settings

Load a large amount of data by using a custom query, Partition options: Dynamic range partition.

without physical partitions, while with an integer or Query: SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND
date/datetime column for data partitioning. <your_additional_where_clause> .

Partition column: Specify the column used to partition data.

Partition upper bound and partition lower bound (optional): Specify if you want to determine the
partition stride. This is not for filtering the rows in table, all rows in the query result will be
partitioned and copied. If not specified, copy activity auto detect the value.

During execution, the service replaces ?AdfRangePartitionColumnName with the actual column name
and value ranges for each partition, and sends to Azure SQL Database.

For example, if your partition column "ID" has values range from 1 to 100, and you set the lower
bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4
partitions- IDs in range <=20, [21, 50], [51, 80], and >=81, respectively.

Here are more sample queries for different scenarios:

1. Query the whole table:

SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition

2. Query from a table with column selection and additional where-clause filters:

SELECT <column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND


<your_additional_where_clause>

3. Query with subqueries:

SELECT <column_list> FROM (<your_sub_query>) AS T WHERE ?AdfDynamicRangePartitionCondition


AND <your_additional_where_clause>

4. Query with partition in subquery:

SELECT <column_list> FROM (SELECT <your_sub_query_column_list> FROM <TableName> WHERE ?


AdfDynamicRangePartitionCondition) AS T

Best practices to load data with partition option:

1. Choose distinctive column as partition column (like primary key or unique key) to avoid data skew.
2. If the table has built-in partition, use partition option "Physical partitions of table" to get better performance.
3. If you use Azure Integration Runtime to copy data, you can set larger "Data Integration Units (DIU)" (>4) to utilize more computing
resource. Check the applicable scenarios there.
4. "Degree of copy parallelism" control the partition numbers, setting this number too large sometime hurts the performance,
recommend setting this number as (DIU or number of Self-hosted IR nodes) * (2 to 4).

Example: full load from large table with physical partitions

JSON

"source": {

"type": "AzureSqlSource",

"partitionOption": "PhysicalPartitionsOfTable"

Example: query with dynamic range partition

JSON

"source": {

"type": "AzureSqlSource",

"query": "SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>",

"partitionOption": "DynamicRange",

"partitionSettings": {

"partitionColumnName": "<partition_column_name>",

"partitionUpperBound": "<upper_value_of_partition_column (optional) to decide the partition stride, not as data


filter>",

"partitionLowerBound": "<lower_value_of_partition_column (optional) to decide the partition stride, not as data


filter>"

Sample query to check physical partition


SQL

SELECT DISTINCT s.name AS SchemaName, t.name AS TableName, pf.name AS PartitionFunctionName, c.name AS ColumnName,
iif(pf.name is null, 'no', 'yes') AS HasPartition

FROM sys.tables AS t

LEFT JOIN sys.objects AS o ON t.object_id = o.object_id

LEFT JOIN sys.schemas AS s ON o.schema_id = s.schema_id

LEFT JOIN sys.indexes AS i ON t.object_id = i.object_id

LEFT JOIN sys.index_columns AS ic ON ic.partition_ordinal > 0 AND ic.index_id = i.index_id AND ic.object_id = t.object_id

LEFT JOIN sys.columns AS c ON c.object_id = ic.object_id AND c.column_id = ic.column_id

LEFT JOIN sys.partition_schemes ps ON i.data_space_id = ps.data_space_id

LEFT JOIN sys.partition_functions pf ON pf.function_id = ps.function_id

WHERE s.name='[your schema]' AND t.name = '[your table name]'

If the table has physical partition, you would see "HasPartition" as "yes" like the following.

Best practice for loading data into Azure SQL Database


When you copy data into Azure SQL Database, you might require different write behavior:

Append: My source data has only new records.


Upsert: My source data has both inserts and updates.
Overwrite: I want to reload an entire dimension table each time.
Write with custom logic: I need extra processing before the final insertion into the destination table.

Refer to the respective sections about how to configure in the service and best practices.

Append data
Appending data is the default behavior of this Azure SQL Database sink connector. the service does a bulk insert to write to your table
efficiently. You can configure the source and sink accordingly in the copy activity.

Upsert data
Copy activity now supports natively loading data into a database temporary table and then update the data in sink table if key exists and
otherwise insert new data. To learn more about upsert settings in copy activities, see Azure SQL Database as the sink.

Overwrite the entire table


You can configure the preCopyScript property in the copy activity sink. In this case, for each copy activity that runs, the service runs the
script first. Then it runs the copy to insert the data. For example, to overwrite the entire table with the latest data, specify a script to first
delete all the records before you bulk load the new data from the source.

Write data with custom logic


The steps to write data with custom logic are similar to those described in the Upsert data section. When you need to apply extra
processing before the final insertion of source data into the destination table, you can load to a staging table then invoke stored procedure
activity, or invoke a stored procedure in copy activity sink to apply data, or use Mapping Data Flow.

Invoke a stored procedure from a SQL sink


When you copy data into Azure SQL Database, you also can configure and invoke a user-specified stored procedure with additional
parameters on each batch of the source table. The stored procedure feature takes advantage of table-valued parameters.

You can use a stored procedure when built-in copy mechanisms don't serve the purpose. An example is when you want to apply extra
processing before the final insertion of source data into the destination table. Some extra processing examples are when you want to
merge columns, look up additional values, and insert into more than one table.

The following sample shows how to use a stored procedure to do an upsert into a table in Azure SQL Database. Assume that the input data
and the sink Marketing table each have three columns: ProfileID, State, and Category. Do the upsert based on the ProfileID column, and
only apply it for a specific category called "ProductA".

1. In your database, define the table type with the same name as sqlWriterTableType. The schema of the table type is the same as the
schema returned by your input data.
SQL

CREATE TYPE [dbo].[MarketingType] AS TABLE(

[ProfileID] [varchar](256) NOT NULL,

[State] [varchar](256) NOT NULL,

[Category] [varchar](256) NOT NULL

2. In your database, define the stored procedure with the same name as sqlWriterStoredProcedureName. It handles input data from
your specified source and merges into the output table. The parameter name of the table type in the stored procedure is the same as
tableName defined in the dataset.

SQL

CREATE PROCEDURE spOverwriteMarketing @Marketing [dbo].[MarketingType] READONLY, @category varchar(256)

AS

BEGIN

MERGE [dbo].[Marketing] AS target

USING @Marketing AS source

ON (target.ProfileID = source.ProfileID and target.Category = @category)

WHEN MATCHED THEN

UPDATE SET State = source.State

WHEN NOT MATCHED THEN

INSERT (ProfileID, State, Category)

VALUES (source.ProfileID, source.State, source.Category);

END

3. In your Azure Data Factory or Synapse pipeline, define the SQL sink section in the copy activity as follows:

JSON

"sink": {

"type": "AzureSqlSink",

"sqlWriterStoredProcedureName": "spOverwriteMarketing",

"storedProcedureTableTypeParameterName": "Marketing",

"sqlWriterTableType": "MarketingType",

"storedProcedureParameters": {

"category": {

"value": "ProductA"

When writing data to into Azure SQL Database using stored procedure, the sink splits the source data into mini batches then do the insert,
so the extra query in stored procedure can be executed multiple times. If you have the query for the copy activity to run before writing data
into Azure SQL Database, it's not recommended to add it to the stored procedure, add it in the Pre-copy script box.

Mapping data flow properties


When transforming data in mapping data flow, you can read and write to tables from Azure SQL Database. For more information, see the
source transformation and sink transformation in mapping data flows.

Source transformation
Settings specific to Azure SQL Database are available in the Source Options tab of the source transformation.

Input: Select whether you point your source at a table (equivalent of Select * from <table-name> ) or enter a custom SQL query.

Query: If you select Query in the input field, enter a SQL query for your source. This setting overrides any table that you've chosen in the
dataset. Order By clauses aren't supported here, but you can set a full SELECT FROM statement. You can also use user-defined table
functions. select * from udfGetData() is a UDF in SQL that returns a table. This query will produce a source table that you can use in your
data flow. Using queries is also a great way to reduce rows for testing or for lookups.

 Tip

The common table expression (CTE) in SQL is not supported in the mapping data flow Query mode, because the prerequisite of using
this mode is that queries can be used in the SQL query FROM clause but CTEs cannot do this.
To use CTEs, you need to create a stored
procedure using the following query:
SQL

CREATE PROC CTESP @query nvarchar(max)

AS

BEGIN

EXECUTE sp_executesql @query;

END

Then use the Stored procedure mode in the source transformation of the mapping data flow and set the @query like example with
CTE as (select 'test' as a) select * from CTE . Then you can use CTEs as expected.

Stored procedure: Choose this option if you wish to generate a projection and source data from a stored procedure that is executed from
your source database. You can type in the schema, procedure name, and parameters, or click on Refresh to ask the service to discover the
schemas and procedure names. Then you can click on Import to import all procedure parameters using the form @paraName .

SQL Example: Select * from MyTable where customerId > 1000 and customerId < 2000
Parameterized SQL Example: "select * from {$tablename} where orderyear > {$year}"

Batch size: Enter a batch size to chunk large data into reads.

Isolation Level: The default for SQL sources in mapping data flow is read uncommitted. You can change the isolation level here to one of
these values:

Read Committed
Read Uncommitted
Repeatable Read
Serializable
None (ignore isolation level)
Enable incremental extract: Use this option to tell ADF to only process rows that have changed since the last time that the pipeline
executed.

Incremental column: When using the incremental extract feature, you must choose the date/time or numeric column that you wish to use
as the watermark in your source table.

Enable native change data capture(Preview): Use this option to tell ADF to only process delta data captured by SQL change data capture
technology since the last time that the pipeline executed. With this option, the delta data including row insert, update and deletion will be
loaded automatically without any incremental column required. You need to enable change data capture on Azure SQL DB before using this
option in ADF. For more information about this option in ADF, see native change data capture.

Start reading from beginning: Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline
with incremental extract turned on.

Sink transformation
Settings specific to Azure SQL Database are available in the Settings tab of the sink transformation.

Update method: Determines what operations are allowed on your database destination. The default is to only allow inserts. To update,
upsert, or delete rows, an alter-row transformation is required to tag rows for those actions. For updates, upserts and deletes, a key column
or columns must be set to determine which row to alter.

The column name that you pick as the key here will be used by the service as part of the subsequent update, upsert, delete. Therefore, you
must pick a column that exists in the Sink mapping. If you wish to not write the value to this key column, then click "Skip writing key
columns".

You can parameterize the key column used here for updating your target Azure SQL Database table. If you have multiple columns for a
composite key, the click on "Custom Expression" and you will be able to add dynamic content using the data flow expression language,
which can include an array of strings with column names for a composite key.

Table action: Determines whether to recreate or remove all rows from the destination table prior to writing.

None: No action will be done to the table.


Recreate: The table will get dropped and recreated. Required if creating a new table dynamically.
Truncate: All rows from the target table will get removed.

Batch size: Controls how many rows are being written in each bucket. Larger batch sizes improve compression and memory optimization,
but risk out of memory exceptions when caching data.

Use TempDB: By default, the service will use a global temporary table to store data as part of the loading process. You can alternatively
uncheck the "Use TempDB" option and instead, ask the service to store the temporary holding table in a user database that is located in the
database that is being used for this Sink.

Pre and Post SQL scripts: Enter multi-line SQL scripts that will execute before (pre-processing) and after (post-processing) data is written to
your Sink database
 Tip

1. It's recommended to break single batch scripts with multiple commands into multiple batches.
2. Only Data Definition Language (DDL) and Data Manipulation Language (DML) statements that return a simple update count can
be run as part of a batch. Learn more from Performing batch operations

Error row handling


When writing to Azure SQL DB, certain rows of data may fail due to constraints set by the destination. Some common errors include:

String or binary data would be truncated in table


Cannot insert the value NULL into column
The INSERT statement conflicted with the CHECK constraint

By default, a data flow run will fail on the first error it gets. You can choose to Continue on error that allows your data flow to complete
even if individual rows have errors. The service provides different options for you to handle these error rows.

Transaction Commit: Choose whether your data gets written in a single transaction or in batches. Single transaction will provide worse
performance but no data written will be visible to others until the transaction completes.

Output rejected data: If enabled, you can output the error rows into a csv file in Azure Blob Storage or an Azure Data Lake Storage Gen2
account of your choosing. This will write the error rows with three additional columns: the SQL operation like INSERT or UPDATE, the data
flow error code, and the error message on the row.

Report success on error: If enabled, the data flow will be marked as a success even if error rows are found.
Data type mapping for Azure SQL Database
When data is copied from or to Azure SQL Database, the following mappings are used from Azure SQL Database data types to Azure Data
Factory interim data types. The same mappings are used by the Synapse pipeline feature, which implements Azure Data Factory directly. To
learn how the copy activity maps the source schema and data type to the sink, see Schema and data type mappings.

Azure SQL Database data type Data Factory interim data type

bigint Int64

binary Byte[]

bit Boolean

char String, Char[]

date DateTime

Datetime DateTime

datetime2 DateTime

Datetimeoffset DateTimeOffset

Decimal Decimal

FILESTREAM attribute (varbinary(max)) Byte[]

Float Double

image Byte[]

int Int32

money Decimal

nchar String, Char[]

ntext String, Char[]

numeric Decimal
Azure SQL Database data type Data Factory interim data type

nvarchar String, Char[]

real Single

rowversion Byte[]

smalldatetime DateTime

smallint Int16

smallmoney Decimal

sql_variant Object

text String, Char[]

time TimeSpan

timestamp Byte[]

tinyint Byte

uniqueidentifier Guid

varbinary Byte[]

varchar String, Char[]

xml String

7 Note

For data types that map to the Decimal interim type, currently Copy activity supports precision up to 28. If you have data with
precision larger than 28, consider converting to a string in SQL query.

Lookup activity properties


To learn details about the properties, check Lookup activity.

GetMetadata activity properties


To learn details about the properties, check GetMetadata activity

Using Always Encrypted


When you copy data from/to Azure SQL Database with Always Encrypted, follow below steps:

1. Store the Column Master Key (CMK) in an Azure Key Vault. Learn more on how to configure Always Encrypted by using Azure Key
Vault

2. Make sure to get access to the key vault where the Column Master Key (CMK) is stored. Refer to this article for required permissions.

3. Create linked service to connect to your SQL database and enable 'Always Encrypted' function by using either managed identity or
service principal.

7 Note

Azure SQL Database Always Encrypted supports below scenarios:

1. Either source or sink data stores is using managed identity or service principal as key provider authentication type.
2. Both source and sink data stores are using managed identity as key provider authentication type.
3. Both source and sink data stores are using the same service principal as key provider authentication type.

7 Note
Currently, Azure SQL Database Always Encrypted is only supported for source transformation in mapping data flows.

Native change data capture


Azure Data Factory can support native change data capture capabilities for SQL Server, Azure SQL DB and Azure SQL MI. The changed data
including row insert, update and deletion in SQL stores can be automatically detected and extracted by ADF mapping dataflow. With the no
code experience in mapping dataflow, users can easily achieve data replication scenario from SQL stores by appending a database as
destination store. What is more, users can also compose any data transform logic in between to achieve incremental ETL scenario from SQL
stores.

Make sure you keep the pipeline and activity name unchanged, so that the checkpoint can be recorded by ADF for you to get changed data
from the last run automatically. If you change your pipeline name or activity name, the checkpoint will be reset, which leads you to start
from beginning or get changes from now in the next run. If you do want to change the pipeline name or activity name but still keep the
checkpoint to get changed data from the last run automatically, please use your own Checkpoint key in dataflow activity to achieve that.

When you debug the pipeline, this feature works the same. Be aware that the checkpoint will be reset when you refresh your browser
during the debug run. After you are satisfied with the pipeline result from debug run, you can go ahead to publish and trigger the pipeline.
At the moment when you first time trigger your published pipeline, it automatically restarts from the beginning or gets changes from now
on.

In the monitoring section, you always have the chance to rerun a pipeline. When you are doing so, the changed data is always captured
from the previous checkpoint of your selected pipeline run.

Example 1:
When you directly chain a source transform referenced to SQL CDC enabled dataset with a sink transform referenced to a database in a
mapping dataflow, the changes happened on SQL source will be automatically applied to the target database, so that you will easily get
data replication scenario between databases. You can use update method in sink transform to select whether you want to allow insert, allow
update or allow delete on target database. The example script in mapping dataflow is as below.

JSON

source(output(

id as integer,

name as string

),

allowSchemaDrift: true,

validateSchema: false,

enableNativeCdc: true,

netChanges: true,

skipInitialLoad: false,

isolationLevel: 'READ_UNCOMMITTED',

format: 'table') ~> source1

source1 sink(allowSchemaDrift: true,

validateSchema: false,

deletable:true,

insertable:true,

updateable:true,

upsertable:true,

keys:['id'],

format: 'table',

skipDuplicateMapInputs: true,

skipDuplicateMapOutputs: true,

errorHandlingOption: 'stopOnFirstError') ~> sink1

Example 2:
If you want to enable ETL scenario instead of data replication between database via SQL CDC, you can use expressions in mapping dataflow
including isInsert(1), isUpdate(1) and isDelete(1) to differentiate the rows with different operation types. The following is one of the example
scripts for mapping dataflow on deriving one column with the value: 1 to indicate inserted rows, 2 to indicate updated rows and 3 to
indicate deleted rows for downstream transforms to process the delta data.

JSON

source(output(

id as integer,

name as string

),

allowSchemaDrift: true,

validateSchema: false,

enableNativeCdc: true,

netChanges: true,

skipInitialLoad: false,

isolationLevel: 'READ_UNCOMMITTED',

format: 'table') ~> source1

source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1

derivedColumn1 sink(allowSchemaDrift: true,

validateSchema: false,

skipDuplicateMapInputs: true,

skipDuplicateMapOutputs: true) ~> sink1

Known limitation:
Only net changes from SQL CDC will be loaded by ADF via cdc.fn_cdc_get_net_changes_.

Next steps
For a list of data stores supported as sources and sinks by the copy activity, see Supported data stores and formats.
Tutorial: Deploy an ASP.NET app to
Azure with Azure SQL Database
Article • 09/21/2022

Azure App Service provides a highly scalable, self-patching web hosting service. This
tutorial shows you how to deploy a data-driven ASP.NET app in App Service and
connect it to Azure SQL Database. When you're finished, you have an ASP.NET app
running in Azure and connected to SQL Database.

In this tutorial, you learn how to:

" Create a database in Azure SQL Database


" Connect an ASP.NET app to SQL Database
" Deploy the app to Azure
" Update the data model and redeploy the app
" Stream logs from Azure to your terminal

If you don't have an Azure subscription, create an Azure free account before you
begin.

Prerequisites
To complete this tutorial:

Install Visual Studio 2022 with the ASP.NET and web development and Azure
development workloads.

If you've installed Visual Studio already, add the workloads in Visual Studio by clicking
Tools > Get Tools and Features.

Download the sample


1. Download the sample project .

2. Extract (unzip) the dotnet-sqldb-tutorial-master.zip file.

The sample project contains a basic ASP.NET MVC create-read-update-delete (CRUD)


app using Entity Framework Code First.

Run the app


1. Open the dotnet-sqldb-tutorial-master/DotNetAppSqlDb.sln file in Visual Studio.

2. Type F5 to run the app. The app is displayed in your default browser.

7 Note

If you only installed Visual Studio and the prerequisites, you may have to
install missing packages via NuGet.

3. Select the Create New link and create a couple to-do items.
4. Test the Edit, Details, and Delete links.

The app uses a database context to connect with the database. In this sample, the
database context uses a connection string named MyDbConnection . The connection
string is set in the Web.config file and referenced in the Models/MyDatabaseContext.cs
file. The connection string name is used later in the tutorial to connect the Azure app to
an Azure SQL Database.

Publish ASP.NET application to Azure


1. In the Solution Explorer, right-click your DotNetAppSqlDb project and select
Publish.
2. Select Azure as your target and click Next.

3. Make sure that Azure App Service (Windows) is selected and click Next.

Sign in and add an app


1. In the Publish dialog, click Sign In.

2. Sign in to your Azure subscription. If you're already signed into a Microsoft


account, make sure that account holds your Azure subscription. If the signed-in
Microsoft account doesn't have your Azure subscription, click it to add the correct
account.

3. In the App Service instances pane, click +.


Configure the web app name
You can keep the generated web app name, or change it to another unique name (valid
characters are a-z , 0-9 , and - ). The web app name is used as part of the default URL
for your app ( <app_name>.azurewebsites.net , where <app_name> is your web app name).
The web app name needs to be unique across all apps in Azure.

7 Note

Don't select Create yet.

Create a resource group


A resource group is a logical container into which Azure resources, such as web apps,
databases, and storage accounts, are deployed and managed. For example, you can
choose to delete the entire resource group in one simple step later.

1. Next to Resource Group, click New.


2. Name the resource group myResourceGroup.

Create an App Service plan

An App Service plan specifies the location, size, and features of the web server farm that
hosts your app. You can save money when you host multiple apps by configuring the
web apps to share a single App Service plan.

App Service plans define:

Region (for example: North Europe, East US, or Southeast Asia)


Instance size (small, medium, or large)
Scale count (1 to 20 instances)
SKU (Free, Shared, Basic, Standard, or Premium)

1. Next to Hosting Plan, click New.

2. In the Configure App Service Plan dialog, configure the new App Service plan with
the following settings and click OK:

Setting Suggested value For more information


Setting Suggested value For more information

App Service Plan myAppServicePlan App Service plans

Location West Europe Azure regions

Size Free Pricing tiers

3. Click Create and wait for the Azure resources to be created.

4. The Publish dialog shows the resources you've configured. Click Finish.
Create a server and database
Before creating a database, you need a logical SQL server. A logical SQL server is a
logical construct that contains a group of databases managed as a group.

1. In the Publish dialog, scroll down to the Service Dependencies section. Next to
SQL Server Database, click Configure.

7 Note

Be sure to configure the SQL Database from the Publish page instead of the
Connected Services page.

2. Select Azure SQL Database and click Next.


3. In the Configure Azure SQL Database dialog, click +.

4. Next to Database server, click New.

The server name is used as part of the default URL for your server,
<server_name>.database.windows.net . It must be unique across all servers in Azure
SQL. Change the server name to a value you want.

5. Add an administrator username and password. For password complexity


requirements, see Password Policy.

Remember this username and password. You need them to manage the server
later.

) Important

Even though your password in the connection strings is masked (in Visual
Studio and also in App Service), the fact that it's maintained somewhere adds
to the attack surface of your app. App Service can use managed service
identities to eliminate this risk by removing the need to maintain secrets in
your code or app configuration at all. For more information, see Next steps.
6. Click OK.

7. In the Azure SQL Database dialog, keep the default generated Database Name.
Select Create and wait for the database resources to be created.

Configure database connection


1. When the wizard finishes creating the database resources, click Next.

2. In the Database connection string Name, type MyDbConnection. This name must
match the connection string that is referenced in Models/MyDatabaseContext.cs.

3. In Database connection user name and Database connection password, type the
administrator username and password you used in Create a server.

4. Make sure Azure App Settings is selected and click Finish.

7 Note

If you see Local user secrets files instead, you must have configured SQL
Database from the Connected Services page instead of the Publish page.
5. Wait for configuration wizard to finish and click Close.

Deploy your ASP.NET app

1. In the Publish tab, scroll back up to the top and click Publish. Once your ASP.NET
app is deployed to Azure. Your default browser is launched with the URL to the
deployed app.

2. Add a few to-do items.


Congratulations! Your data-driven ASP.NET application is running live in Azure App
Service.

Access the database locally


Visual Studio lets you explore and manage your new database in Azure easily in the SQL
Server Object Explorer. The new database already opened its firewall to the App Service
app that you created. But to access it from your local computer (such as from Visual
Studio), you must open a firewall for your local machine's public IP address. If your
internet service provider changes your public IP address, you need to reconfigure the
firewall to access the Azure database again.

Create a database connection

1. From the View menu, select SQL Server Object Explorer.

2. At the top of SQL Server Object Explorer, click the Add SQL Server button.

Configure the database connection


1. In the Connect dialog, expand the Azure node. All your SQL Database instances in
Azure are listed here.

2. Select the database that you created earlier. The connection you created earlier is
automatically filled at the bottom.

3. Type the database administrator password you created earlier and click Connect.
Allow client connection from your computer
The Create a new firewall rule dialog is opened. By default, a server only allows
connections to its databases from Azure services, such as your Azure app. To connect to
your database from outside of Azure, create a firewall rule at the server level. The
firewall rule allows the public IP address of your local computer.

The dialog is already filled with your computer's public IP address.

1. Make sure that Add my client IP is selected and click OK.


Once Visual Studio finishes creating the firewall setting for your SQL Database
instance, your connection shows up in SQL Server Object Explorer.

Here, you can perform the most common database operations, such as run
queries, create views and stored procedures, and more.

2. Expand your connection > Databases > <your database> > Tables. Right-click on
the Todoes table and select View Data.
Update app with Code First Migrations
You can use the familiar tools in Visual Studio to update your database and app in
Azure. In this step, you use Code First Migrations in Entity Framework to make a change
to your database schema and publish it to Azure.

For more information about using Entity Framework Code First Migrations, see Getting
Started with Entity Framework 6 Code First using MVC 5.

Update your data model

Open Models\Todo.cs in the code editor. Add the following property to the ToDo class:

C#

public bool Done { get; set; }

Run Code First Migrations locally


Run a few commands to make updates to your local database.

1. From the Tools menu, click NuGet Package Manager > Package Manager
Console.

2. In the Package Manager Console window, enable Code First Migrations:

PowerShell

Enable-Migrations

3. Add a migration:

PowerShell

Add-Migration AddProperty

4. Update the local database:

PowerShell

Update-Database

5. Type Ctrl+F5 to run the app. Test the edit, details, and create links.

If the application loads without errors, then Code First Migrations has succeeded.
However, your page still looks the same because your application logic is not using this
new property yet.

Use the new property

Make some changes in your code to use the Done property. For simplicity in this tutorial,
you're only going to change the Index and Create views to see the property in action.

1. Open Controllers\TodosController.cs.

2. Find the Create() method on line 52 and add Done to the list of properties in the
Bind attribute. When you're done, your Create() method signature looks like the

following code:

C#
public ActionResult Create([Bind(Include =
"Description,CreatedDate,Done")] Todo todo)

3. Open Views\Todos\Create.cshtml.

4. In the Razor code, you should see a <div class="form-group"> element that uses
model.Description , and then another <div class="form-group"> element that uses

model.CreatedDate . Immediately following these two elements, add another <div


class="form-group"> element that uses model.Done :

C#

<div class="form-group">

@Html.LabelFor(model => model.Done, htmlAttributes: new { @class =


"control-label col-md-2" })

<div class="col-md-10">

<div class="checkbox">

@Html.EditorFor(model => model.Done)

@Html.ValidationMessageFor(model => model.Done, "", new {


@class = "text-danger" })

</div>

</div>

</div>

5. Open Views\Todos\Index.cshtml.

6. Search for the empty <th></th> element. Just above this element, add the
following Razor code:

C#

<th>

@Html.DisplayNameFor(model => model.Done)

</th>

7. Find the <td> element that contains the Html.ActionLink() helper methods. Above
this <td> , add another <td> element with the following Razor code:

C#

<td>

@Html.DisplayFor(modelItem => item.Done)

</td>

That's all you need to see the changes in the Index and Create views.
8. Type Ctrl+F5 to run the app.

You can now add a to-do item and check Done. Then it should show up in your
homepage as a completed item. Remember that the Edit view doesn't show the Done
field, because you didn't change the Edit view.

Enable Code First Migrations in Azure

Now that your code change works, including database migration, you publish it to your
Azure app and update your SQL Database with Code First Migrations too.

1. Just like before, right-click your project and select Publish.

2. Click More actions > Edit to open the publish settings.

3. In the MyDatabaseContext dropdown, select the database connection for your


Azure SQL Database.

4. Select Execute Code First Migrations (runs on application start), then click Save.
Publish your changes

Now that you enabled Code First Migrations in your Azure app, publish your code
changes.

1. In the publish page, click Publish.

2. Try adding to-do items again and select Done, and they should show up in your
homepage as a completed item.
All your existing to-do items are still displayed. When you republish your ASP.NET
application, existing data in your SQL Database is not lost. Also, Code First Migrations
only changes the data schema and leaves your existing data intact.

Stream application logs


You can stream tracing messages directly from your Azure app to Visual Studio.

Open Controllers\TodosController.cs.

Each action starts with a Trace.WriteLine() method. This code is added to show you
how to add trace messages to your Azure app.

Enable log streaming


1. In the publish page, scroll down to the Hosting section.

2. At the right-hand corner, click ... > View Streaming Logs.


The logs are now streamed into the Output window.

However, you don't see any of the trace messages yet. That's because when you
first select View Streaming Logs, your Azure app sets the trace level to Error ,
which only logs error events (with the Trace.TraceError() method).

Change trace levels

1. To change the trace levels to output other trace messages, go back to the publish
page.

2. In the Hosting section, click ... > Open in Azure portal.

3. In the portal management page for your app, from the left menu, select App
Service logs.

4. Under Application Logging (File System), select Verbose in Level. Click Save.

 Tip

You can experiment with different trace levels to see what types of messages
are displayed for each level. For example, the Information level includes all
logs created by Trace.TraceInformation() , Trace.TraceWarning() , and
Trace.TraceError() , but not logs created by Trace.WriteLine() .

5. In your browser navigate to your app again at http://<your app


name>.azurewebsites.net, then try clicking around the to-do list application in
Azure. The trace messages are now streamed to the Output window in Visual
Studio.

Console

Application: 2017-04-06T23:30:41 PID[8132] Verbose GET


/Todos/Index

Application: 2017-04-06T23:30:43 PID[8132] Verbose GET


/Todos/Create

Application: 2017-04-06T23:30:53 PID[8132] Verbose POST


/Todos/Create

Application: 2017-04-06T23:30:54 PID[8132] Verbose GET


/Todos/Index

Stop log streaming

To stop the log-streaming service, click the Stop monitoring button in the Output
window.

Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't
expect to need these resources in the future, you can delete them by deleting the
resource group.

1. From your web app's Overview page in the Azure portal, select the
myResourceGroup link under Resource group.
2. On the resource group page, make sure that the listed resources are the ones you
want to delete.
3. Select Delete, type myResourceGroup in the text box, and then select Delete.

Next steps
In this tutorial, you learned how to:

" Create a database in Azure SQL Database


" Connect an ASP.NET app to SQL Database
" Deploy the app to Azure
" Update the data model and redeploy the app
" Stream logs from Azure to your terminal

Advance to the next tutorial to learn how to easily improve the security of your
connection Azure SQL Database.

Tutorial: Connect to SQL Database from App Service without secrets using a
managed identity

More resources:

Configure ASP.NET app

Want to optimize and save on your cloud spending?

Start analyzing costs with Cost Management


Use Azure Functions to connect to an
Azure SQL Database
Article • 01/30/2023

This article shows you how to use Azure Functions to create a scheduled job that
connects to an Azure SQL Database or Azure SQL Managed Instance. The function code
cleans up rows in a table in the database. The new C# function is created based on a
pre-defined timer trigger template in Visual Studio 2019. To support this scenario, you
must also set a database connection string as an app setting in the function app. For
Azure SQL Managed Instance you need to enable public endpoint to be able to connect
from Azure Functions. This scenario uses a bulk operation against the database.

If this is your first experience working with C# Functions, you should read the Azure
Functions C# developer reference.

Prerequisites
Complete the steps in the article Create your first function using Visual Studio to
create a local function app that targets version 2.x or a later version of the runtime.
You must also have published your project to a function app in Azure.

This article demonstrates a Transact-SQL command that executes a bulk cleanup


operation in the SalesOrderHeader table in the AdventureWorksLT sample
database. To create the AdventureWorksLT sample database, complete the steps in
the article Create a database in Azure SQL Database using the Azure portal.

You must add a server-level firewall rule for the public IP address of the computer
you use for this quickstart. This rule is required to be able access the SQL Database
instance from your local computer.

Get connection information


You need to get the connection string for the database you created when you
completed Create a database in Azure SQL Database using the Azure portal.

1. Sign in to the Azure portal .

2. Select SQL Databases from the left-hand menu, and select your database on the
SQL databases page.
3. Select Connection strings under Settings and copy the complete ADO.NET
connection string. For Azure SQL Managed Instance copy connection string for
public endpoint.

Set the connection string


A function app hosts the execution of your functions in Azure. As a best security
practice, store connection strings and other secrets in your function app settings. Using
application settings prevents accidental disclosure of the connection string with your
code. You can access app settings for your function app right from Visual Studio.

You must have previously published your app to Azure. If you haven't already done so,
Publish your function app to Azure.

1. In Solution Explorer, right-click the function app project and choose Publish.

2. On the Publish page, select the ellipses ( ... ) in the Hosting area, and choose
Manage Azure App Service settings.
3. In Application Settings select Add setting, in New app setting name type
sqldb_connection , and select OK.
4. In the new sqldb_connection setting, paste the connection string you copied in
the previous section into the Local field and replace {your_username} and
{your_password} placeholders with real values. Select Insert value from local to

copy the updated value into the Remote field, and then select OK.

The connection strings are stored encrypted in Azure (Remote). To prevent leaking
secrets, the local.settings.json project file (Local) should be excluded from source
control, such as by using a .gitignore file.

Add the SqlClient package to the project


You need to add the NuGet package that contains the SqlClient library. This data access
library is needed to connect to SQL Database.

1. Open your local function app project in Visual Studio 2022.

2. In Solution Explorer, right-click the function app project and choose Manage
NuGet Packages.

3. On the Browse tab, search for Microsoft.Data.SqlClient and, when found, select
it.

4. In the Microsoft.Data.SqlClient page, select version 5.1.0 and then click Install.
5. When the install completes, review the changes and then click OK to close the
Preview window.

6. If a License Acceptance window appears, click I Accept.

Now, you can add the C# function code that connects to your SQL Database.

Add a timer triggered function


1. In Solution Explorer, right-click the function app project and choose Add > New
Azure function.

2. With the Azure Functions template selected, name the new item something like
DatabaseCleanup.cs and select Add.

3. In the New Azure function dialog box, choose Timer trigger and then Add. This
dialog creates a code file for the timer triggered function.

4. Open the new code file and add the following using statements at the top of the
file:

C#

using Microsoft.Data.SqlClient;

using System.Threading.Tasks;

5. Replace the existing Run function with the following code:

C#

[FunctionName("DatabaseCleanup")]

public static async Task Run([TimerTrigger("*/15 * * * * *")]TimerInfo


myTimer, ILogger log)

// Get the connection string from app settings and use it to create
a connection.

var str = Environment.GetEnvironmentVariable("sqldb_connection");

using (SqlConnection conn = new SqlConnection(str))

conn.Open();

var text = "UPDATE SalesLT.SalesOrderHeader " +

"SET [Status] = 5 WHERE ShipDate < GetDate();";

using (SqlCommand cmd = new SqlCommand(text, conn))

// Execute the command and log the # rows affected.

var rows = await cmd.ExecuteNonQueryAsync();

log.LogInformation($"{rows} rows were updated");

This function runs every 15 seconds to update the Status column based on the
ship date. To learn more about the Timer trigger, see Timer trigger for Azure
Functions.

6. Press F5 to start the function app. The Azure Functions Core Tools execution
window opens behind Visual Studio.

7. At 15 seconds after startup, the function runs. Watch the output and note the
number of rows updated in the SalesOrderHeader table.

On the first execution, you should update 32 rows of data. Following runs update
no data rows, unless you make changes to the SalesOrderHeader table data so that
more rows are selected by the UPDATE statement.

If you plan to publish this function, remember to change the TimerTrigger attribute to a
more reasonable cron schedule than every 15 seconds. You also need to make sure that
your function app can access the Azure SQL Database or Azure SQL Managed Instance.
For more information, see one of the following links based on your type of Azure SQL:

Azure SQL Database


Azure SQL Managed Instance

Next steps
Next, learn how to use. Functions with Logic Apps to integrate with other services.
Create a function that integrates with Logic Apps

For more information about Functions, see the following articles:

Azure Functions developer reference

Programmer reference for coding functions and defining triggers and bindings.
Testing Azure Functions

Describes various tools and techniques for testing your functions.


Connect to an SQL database from
workflows in Azure Logic Apps
Article • 06/27/2023

Applies to: Azure Logic Apps (Consumption + Standard)

This how-to guide shows how to access your SQL database from a workflow in Azure
Logic Apps with the SQL Server connector. You can then create automated workflows
that run when triggered by events in your SQL database or in other systems and run
actions to manage your SQL data and resources.

For example, your workflow can run actions that get, insert, and delete data or that can
run SQL queries and stored procedures. Your workflow can check for new records in a
non-SQL database, do some processing work, use the results to create new records in
your SQL database, and send email alerts about the new records.

If you're new to Azure Logic Apps, review the following get started documentation:

What is Azure Logic Apps

Create an example Consumption logic app workflow in multi-tenant Azure Logic


Apps

Create an example Standard logic app workflow in single-tenant Azure Logic Apps

Supported SQL editions


The SQL Server connector supports the following SQL editions:

SQL Server
Azure SQL Database
Azure SQL Managed Instance

Connector technical reference


The SQL Server connector has different versions, based on logic app type and host
environment.

Logic app Environment Connector version


Logic app Environment Connector version

Consumption Multi-tenant Azure Managed connector, which appears in the designer under
Logic Apps the Standard label. For more information, review the
following documentation:

- SQL Server managed connector reference

- Managed connectors in Azure Logic Apps

Consumption Integration service Managed connector, which appears in the designer under
environment (ISE) the Standard label, and the ISE version, which has
different message limits than the Standard class. For more
information, review the following documentation:

- SQL Server managed connector reference

- ISE message limits

- Managed connectors in Azure Logic Apps

Standard Single-tenant Azure Managed connector, which appears in the designer under
Logic Apps and App the Azure label, and built-in connector, which appears in
Service Environment the designer under the Built-in label and is service
v3 (Windows plans provider based. The built-in version differs in the following
only) ways:

- The built-in version can connect directly to an SQL


database and access Azure virtual networks. You don't
need an on-premises data gateway.

For more information, review the following


documentation:

- SQL Server managed connector reference

- SQL Server built-in connector reference

- Built-in connectors in Azure Logic Apps

Limitations
For more information, review the SQL Server managed connector reference or the SQL
Server built-in connector reference.

Prerequisites
An Azure account and subscription. If you don't have a subscription, sign up for a
free Azure account .

SQL Server database, Azure SQL Database, or SQL Managed Instance.


The SQL Server connector requires that your tables contain data so that the
connector operations can return results when called. For example, if you use Azure
SQL Database, you can use the included sample databases to try the SQL Server
connector operations.

The information required to create an SQL database connection, such as your SQL
server and database name. If you're using Windows Authentication or SQL Server
Authentication to authenticate access, you also need your user name and
password. You can usually find this information in the connection string.

) Important

If you use an SQL Server connection string that you copied directly from the
Azure portal,
you have to manually add your password to the connection
string.

For an SQL database in Azure, the connection string has the following format:

Server=tcp:{your-server-name}.database.windows.net,1433;Initial Catalog=
{your-database-name};Persist Security Info=False;User ID={your-user-

name};Password={your-

password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificat
e=False;Connection Timeout=30;

1. To find this string in the Azure portal , open your database.

2. On the database menu, under Properties, select Connection strings.

For an on-premises SQL server, the connection string has the following format:

Server={your-server-address};Database={your-database-name};User Id={your-

user-name};Password={your-password};

The logic app workflow where you want to access your SQL database. To start your
workflow with a SQL Server trigger, you have to start with a blank workflow. To use
a SQL Server action, start your workflow with any trigger.

To connect to an on-premises SQL server, the following extra requirements apply,


based on whether you have a Consumption or Standard logic app workflow.

Consumption workflow
In multi-tenant Azure Logic Apps, you need the on-premises data gateway
installed on a local computer and a data gateway resource that's already
created in Azure.

In an ISE, you don't need the on-premises data gateway for SQL Server
Authentication and non-Windows Authentication connections, and you can
use the ISE-versioned SQL Server connector. For Windows Authentication,
you need the on-premises data gateway on a local computer and a data
gateway resource that's already created in Azure. The ISE-version connector
doesn't support Windows Authentication, so you have to use the regular SQL
Server managed connector.

Standard workflow

You can use the SQL Server built-in connector or managed connector.

To use Azure Active Directory authentication or managed identity


authentication with your logic app, you have to set up your SQL Server to
work with these authentication types. For more information, see
Authentication - SQL Server managed connector reference.

To use the built-in connector, you can authenticate your connection with
either a managed identity, Azure Active Directory, or a connection string. You
can adjust connection pooling by specifying parameters in the connection
string. For more information, review Connection Pooling.

To use the SQL Server managed connector, follow the same requirements as
a Consumption logic app workflow in multi-tenant Azure Logic Apps. For
other connector requirements, review the SQL Server managed connector
reference.

Add a SQL Server trigger


The following steps use the Azure portal, but with the appropriate Azure Logic Apps
extension, you can also use the following tools to create logic app workflows:

Consumption workflows: Visual Studio or Visual Studio Code

Standard workflows: Visual Studio Code

Consumption
1. In the Azure portal , open your Consumption logic app and blank workflow
in the designer.

2. In the designer, under the search box, select Standard. Then, follow these
general steps to add the SQL Server managed trigger you want.

This example continues with the trigger named When an item is created.

3. If prompted, provide the information for your connection. When you're done,
select Create.

4. After the trigger information box appears, provide the necessary information
required by your selected trigger.

For this example, in the trigger named When an item is created, provide the
values for the SQL server name and database name, if you didn't previously
provide them. Otherwise, from the Table name list, select the table that you
want to use. Select the Frequency and Interval to set the schedule for the
trigger to check for new items.

5. If any other properties are available for this trigger, open the Add new
parameter list, and select those properties relevant to your scenario.

This trigger returns only one row from the selected table, and nothing else. To
perform other tasks, continue by adding either a SQL Server connector action
or another action that performs the next task that you want in your logic app
workflow.
For example, to view the data in this row, you can add other actions that
create a file that includes the fields from the returned row, and then send
email alerts. To learn about other available actions for this connector, see the
SQL Server managed connector reference.

6. When you're done, save your workflow. On the designer toolbar, select Save.

When you save your workflow, this step automatically publishes your updates to your
deployed logic app, which is live in Azure. With only a trigger, your workflow just checks
the SQL database based on your specified schedule. You have to add an action that
responds to the trigger.

Add a SQL Server action


The following steps use the Azure portal, but with the appropriate Azure Logic Apps
extension, you can also use Visual Studio to edit Consumption logic app workflows or
Visual Studio Code to the following tools to edit logic app workflows:

Consumption workflows: Visual Studio or Visual Studio Code

Standard workflows: Visual Studio Code

In this example, the logic app workflow starts with the Recurrence trigger, and calls an
action that gets a row from an SQL database.

Consumption

1. In the Azure portal , open your Consumption logic app and workflow in the
designer.

2. In the designer, follow these general steps to add the SQL Server managed
action you want.

This example continues with the action named Get row, which gets a single
record.

3. If prompted, provide the information for your connection. When you're done,
select Create.

4. After the action information box appears, from the Table name list, select the
table that you want to use. In the Row id property, enter the ID for the record
that you want.
For this example, the table name is SalesLT.Customer.

This action returns only one row from the selected table, and nothing else. To
view the data in this row, add other actions. For example, such actions might
create a file, include the fields from the returned row, and store the file in a
cloud storage account. To learn about other available actions for this
connector, see the connector's reference page.

5. When you're done, save your workflow. On the designer toolbar, select Save.

Connect to your database


When you add a trigger or action that connects to a service or system, and you don't
have an existing or active connection, Azure Logic Apps prompts you to provide the
connection information, which varies based on the connection type, for example:

Your account credentials


A name to use for the connection
The name for the server or system
The authentication type to use
A connection string

After you provide this information, continue with the following steps based on your
target database:

Connect to cloud-based Azure SQL Database or SQL Managed Instance


Connect to on-premises SQL Server

Connect to Azure SQL Database or SQL Managed


Instance
To access a SQL Managed Instance without using the on-premises data gateway or
integration service environment, you have to set up the public endpoint on the SQL
Managed Instance. The public endpoint uses port 3342, so make sure that you specify
this port number when you create the connection from your logic app.

In the connection information box, complete the following steps:

1. For Connection name, provide a name to use for your connection.

2. For Authentication type, select the authentication that's required and enabled on
your database in Azure SQL Database or SQL Managed Instance:

Authentication Description

Connection - Supported only in Standard workflows with the SQL Server built-in
string connector.

- Requires the connection string to your SQL server and database.

Active - Supported only in Standard workflows with the SQL Server built-in
Directory connector. For more information, see the following documentation:

OAuth
- Authentication for SQL Server connector

- Enable Azure Active Directory Open Authentication (Azure AD OAuth)

- Azure Active Directory Open Authentication

Logic Apps - Supported with the SQL Server managed connector and ISE-versioned
Managed connector. In Standard workflows, this authentication type is available for
Identity the SQL Server built-in connector, but the option is named Managed
identity instead.

- Requires the following items:

--- A valid managed identity that's enabled on your logic app resource
and has access to your database.

--- SQL DB Contributor role access to the SQL Server resource

--- Contributor access to the resource group that includes the SQL
Server resource.

For more information, see the following documentation:

- Managed identity authentication for SQL Server connector

- SQL - Server-Level Roles


Authentication Description

Service - Supported with the SQL Server managed connector.

principal
(Azure AD - Requires an Azure AD application and service principal. For more
application) information, see Create an Azure AD application and service principal
that can access resources using the Azure portal.

Azure AD - Supported with the SQL Server managed connector and ISE-versioned
Integrated connector.

- Requires a valid managed identity in Azure Active Directory (Azure AD)


that's enabled on your logic app resource and has access to your
database. For more information, see these topics:

- Azure SQL Security Overview - Authentication

- Authorize database access to Azure SQL - Authentication and


authorization

- Azure SQL - Azure AD Integrated authentication

SQL Server - Supported with the SQL Server managed connector and ISE-versioned
Authentication connector.

- Requires the following items:

--- A data gateway resource that's previously created in Azure for your
connection, regardless whether your logic app is in multi-tenant Azure
Logic Apps or an ISE.

--- A valid user name and strong password that are created and stored in
your SQL Server database. For more information, see the following
topics:

- Azure SQL Security Overview - Authentication

- Authorize database access to Azure SQL - Authentication and


authorization

The following examples show how the connection information box might appear if
you use the SQL Server managed connector and select Azure AD Integrated
authentication:

Consumption workflows
Standard workflows

3. After you select Azure AD Integrated, select Sign in. Based on whether you use
Azure SQL Database or SQL Managed Instance, select your user credentials for
authentication.

4. Select these values for your database:

Property Required Description

Server Yes The address for your SQL server, for example, Fabrikam-Azure-
name SQL.database.windows.net

Database Yes The name for your SQL database, for example, Fabrikam-Azure-
name SQL-DB

Table Yes The table that you want to use, for example, SalesLT.Customer
name

 Tip

To provide your database and table information, you have these options:

Find this information in your database's connection string. For example,


in the Azure portal, find and open your database. On the database
menu, select either Connection strings or Properties, where you can
find the following string:

Server=tcp:{your-server-address}.database.windows.net,1433;Initial

Catalog={your-database-name};Persist Security Info=False;User ID=

{your-user-name};Password={your-

password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCer

tificate=False;Connection Timeout=30;

By default, tables in system databases are filtered out, so they might not
automatically appear when you select a system database. As an
alternative, you can manually enter the table name after you select Enter
custom value from the database list.

This database information box looks similar to the following example:

Consumption workflows

Standard workflows
5. Now, continue with the steps that you haven't completed yet in either Add a SQL
trigger or Add a SQL action.

Connect to on-premises SQL Server


In the connection information box, complete the following steps:

1. For connections to your on-premises SQL server that require the on-premises data
gateway, make sure that you've completed these prerequisites.

Otherwise, your data gateway resource doesn't appear in the Connection Gateway
list when you create your connection.

2. For Authentication Type, select the authentication that's required and enabled on
your SQL Server:

Authentication Description
Authentication Description

SQL Server - Supported with the SQL Server managed connector, SQL Server built-in
Authentication connector, and ISE-versioned connector.

- Requires the following items:

--- A data gateway resource that's previously created in Azure for your
connection, regardless whether your logic app is in multi-tenant Azure
Logic Apps or an ISE.

--- A valid user name and strong password that are created and stored in
your SQL Server.

For more information, see SQL Server Authentication.

Windows - Supported with the SQL Server managed connector.

Authentication
- Requires the following items:

--- A data gateway resource that's previously created in Azure for your
connection, regardless whether your logic app is in multi-tenant Azure
Logic Apps or an ISE.

--- A valid Windows user name and password to confirm your identity
through your Windows account.

For more information, see Windows Authentication.

3. Select or provide the following values for your SQL database:

Property Required Description

SQL server Yes The address for your SQL server, for example,
name Fabrikam-Azure-SQL.database.windows.net

SQL Yes The name for your SQL Server database, for example,
database Fabrikam-Azure-SQL-DB
name

Username Yes Your user name for the SQL server and database

Password Yes Your password for the SQL server and database

Subscription Yes, for Windows The Azure subscription for the data gateway resource
authentication that you previously created in Azure
Property Required Description

Connection Yes, for Windows The name for the data gateway resource that you
Gateway authentication previously created in Azure

Tip: If your gateway doesn't appear in the list, check


that you correctly set up your gateway.

 Tip

You can find this information in your database's connection string:

Server={your-server-address}

Database={your-database-name}

User ID={your-user-name}

Password={your-password}

The following examples show how the connection information box might appear if
you select Windows authentication.

Consumption workflows

Standard workflows
4. When you're ready, select Create.

5. Now, continue with the steps that you haven't completed yet in either Add a SQL
trigger or Add a SQL action.

Handle bulk data


Sometimes, you work with result sets so large that the connector doesn't return all the
results at the same time. Or, you want better control over the size and structure for your
result sets. The following list includes some ways that you can handle such large result
sets:

To help you manage results as smaller sets, turn on pagination. For more
information, see Get bulk data, records, and items by using pagination. For more
information, see SQL Pagination for bulk data transfer with Logic Apps .

Create a stored procedure that organizes the results the way that you want. The
SQL Server connector provides many backend features that you can access by
using Azure Logic Apps so that you can more easily automate business tasks that
work with SQL database tables.

When a SQL action gets or inserts multiple rows, your logic app workflow can
iterate through these rows by using an until loop within these limits. However,
when your logic app has to work with record sets so large, for example, thousands
or millions of rows, that you want to minimize the costs resulting from calls to the
database.

To organize the results in the way that you want, you can create a stored
procedure that runs in your SQL instance and uses the SELECT - ORDER BY
statement. This solution gives you more control over the size and structure of your
results. Your logic app calls the stored procedure by using the SQL Server
connector's Execute stored procedure action. For more information, see SELECT -
ORDER BY Clause.

7 Note

The SQL Server connector has a stored procedure timeout limit that's less
than 2 minutes.
Some stored procedures might take longer than this limit to
complete, causing a 504 Timeout error. You can work around this problem
by
using a SQL completion trigger, native SQL pass-through query, a state table,
and server-side jobs.

For this task, you can use the Azure Elastic Job Agent
for Azure SQL
Database. For
SQL Server on premises
and SQL Managed Instance,
you can
use the SQL Server Agent. To learn more, see
Handle long-running stored
procedure timeouts in the SQL Server connector for Azure Logic Apps.

Handle dynamic bulk data


When you call a stored procedure by using the SQL Server connector, the returned
output is sometimes dynamic. In this scenario, follow these steps:

1. In the Azure portal , open your logic app and workflow in the designer.

2. View the output format by performing a test run. Copy and save your sample
output.

3. In the designer, under the action where you call the stored procedure, add the
built-in action named Parse JSON.

4. In the Parse JSON action, select Use sample payload to generate schema.

5. In the Enter or paste a sample JSON payload box, paste your sample output, and
select Done.

7 Note
If you get an error that Azure Logic Apps can't generate a schema, check that
your
sample output's syntax is correctly formatted. If you still can't generate
the schema,
in the Schema box, manually enter the schema.

6. When you're done, save your workflow.

7. To reference the JSON content properties, select inside the edit boxes where you
want to reference those properties so that the dynamic content list appears. In the
list, under the Parse JSON heading, select the data tokens for the JSON content
properties that you want.

Next steps
Managed connectors for Azure Logic Apps
Built-in connectors for Azure Logic Apps
Index data from Azure SQL
Article • 01/19/2023

In this article, learn how to configure an indexer that imports content from Azure SQL
Database or an Azure SQL managed instance and makes it searchable in Azure
Cognitive Search.

This article supplements Create an indexer with information that's specific to Azure SQL.
It uses the REST APIs to demonstrate a three-part workflow common to all indexers:
create a data source, create an index, create an indexer.

This article also provides:

A description of the change detection policies supported by the Azure SQL indexer
so that you can set up incremental indexing.

A frequently-asked-questions (FAQ) section for answers to questions about feature


compatibility.

7 Note

Real-time data synchronization isn't possible with an indexer. An indexer can


reindex your table at most every five minutes. If data updates need to be reflected
in the index sooner, we recommend pushing updated rows directly.

Prerequisites
An Azure SQL database with data in a single table or view.

Use a table if your data is large or if you need incremental indexing using SQL's
native change detection capabilities.

Use a view if you need to consolidate data from multiple tables. Large views aren't
ideal for SQL indexer. A workaround is to create a new table just for ingestion into
your Cognitive Search index. You'll be able to use SQL integrated change tracking,
which is easier to implement than High Water Mark.

Read permissions. Azure Cognitive Search supports SQL Server authentication,


where the user name and password are provided on the connection string.
Alternatively, you can set up a managed identity and use Azure roles.

To work through the examples in this article, you'll need a REST client, such as Postman.
Other approaches for creating an Azure SQL indexer include Azure SDKs or Import data
wizard in the Azure portal. If you're using Azure portal, make sure that access to all
public networks is enabled in the Azure SQL firewall and that the client has access via an
inbound rule.

Define the data source


The data source definition specifies the data to index, credentials, and policies for
identifying changes in the data. A data source is defined as an independent resource so
that it can be used by multiple indexers.

1. Create data source or Update data source to set its definition:

HTTP

POST https://myservice.search.windows.net/datasources?api-
version=2020-06-30

Content-Type: application/json

api-key: admin-key

"name" : "myazuresqldatasource",

"description" : "A database for testing Azure Cognitive Search


indexes.",

"type" : "azuresql",

"credentials" : { "connectionString" : "Server=tcp:<your


server>.database.windows.net,1433;Database=<your database>;User ID=
<your user name>;Password=<your
password>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"
},

"container" : {

"name" : "name of the table or view that you want to index",

"query" : null (not supported in the Azure SQL indexer)

},

"dataChangeDetectionPolicy": null,

"dataDeletionDetectionPolicy": null,

"encryptionKey": null,

"identity": null

2. Provide a unique name for the data source that follows Azure Cognitive Search
naming conventions.

3. Set "type" to "azuresql" (required).

4. Set "credentials" to a connection string:


You can get a full access connection string from the Azure portal . Use the
ADO.NET connection string option. Set the user name and password.

Alternatively, you can specify a managed identity connection string that


doesn't include database secrets with the following format: Initial
Catalog|Database=<your database name>;ResourceId=/subscriptions/<your

subscription ID>/resourceGroups/<your resource group

name>/providers/Microsoft.Sql/servers/<your SQL Server name>/;Connection


Timeout=connection timeout length; .

For more information, see Connect to Azure SQL Database indexer using a
managed identity.

Add search fields to an index


In a search index, add fields that correspond to the fields in SQL database. Ensure that
the search index schema is compatible with source schema by using equivalent data
types.

1. Create or update an index to define search fields that will store data:

HTTP

POST https://[service name].search.windows.net/indexes?api-


version=2020-06-30

Content-Type: application/json

api-key: [Search service admin key]

"name": "mysearchindex",

"fields": [{

"name": "id",

"type": "Edm.String",

"key": true,

"searchable": false

},

"name": "description",

"type": "Edm.String",

"filterable": false,

"searchable": true,

"sortable": false,

"facetable": false,

"suggestions": true

2. Create a document key field ("key": true) that uniquely identifies each search
document. This is the only field that's required in a search index. Typically, the
table's primary key is mapped to the index key field. The document key must be
unique and non-null. The values can be numeric in source data, but in a search
index, a key is always a string.

3. Create more fields to add more searchable content. See Create an index for
guidance.

Mapping data types

SQL data type Cognitive Search Notes


field types

bit Edm.Boolean,
Edm.String

int, smallint, tinyint Edm.Int32, Edm.Int64,


Edm.String

bigint Edm.Int64, Edm.String

real, float Edm.Double,


Edm.String

smallmoney, money Edm.String Azure Cognitive Search doesn't support


decimal numeric converting decimal types into Edm.Double
because doing so would lose precision

char, nchar, varchar, Edm.String


A SQL string can be used to populate a
nvarchar Collection(Edm.String) Collection( Edm.String ) field if the string
represents a JSON array of strings: ["red",
"white", "blue"]

smalldatetime, datetime, Edm.DateTimeOffset,


datetime2, date, Edm.String
datetimeoffset

uniqueidentifer Edm.String

geography Edm.GeographyPoint Only geography instances of type POINT with


SRID 4326 (which is the default) are
supported

rowversion Not applicable Row-version columns can't be stored in the


search index, but they can be used for
change tracking
SQL data type Cognitive Search Notes
field types

time, timespan, binary, Not applicable Not supported


varbinary, image, xml,
geometry, CLR types

Configure and run the Azure SQL indexer


Once the index and data source have been created, you're ready to create the indexer.
Indexer configuration specifies the inputs, parameters, and properties controlling run
time behaviors.

1. Create or update an indexer by giving it a name and referencing the data source
and target index:

HTTP

POST https://[service name].search.windows.net/indexers?api-


version=2020-06-30

Content-Type: application/json

api-key: [search service admin key]

"name" : "[my-sqldb-indexer]",

"dataSourceName" : "[my-sqldb-ds]",

"targetIndexName" : "[my-search-index]",

"disabled": null,

"schedule": null,

"parameters": {

"batchSize": null,

"maxFailedItems": 0,

"maxFailedItemsPerBatch": 0,

"base64EncodeKeys": false,

"configuration": {

"queryTimeout": "00:04:00",

"convertHighWaterMarkToRowVersion": false,

"disableOrderByHighWaterMarkColumn": false

},

"fieldMappings": [],

"encryptionKey": null

2. Under parameters, the configuration section has parameters that are specific to
Azure SQL:

Default query timeout for SQL query execution is 5 minutes, which you can
override.
"convertHighWaterMarkToRowVersion" optimizes for the High Water Mark
change detection policy. Change detection policies are set in the data source.
If you're using the native change detection policy, this parameter has no
effect.

"disableOrderByHighWaterMarkColumn" causes the SQL query used by the


high water mark policy to omit the ORDER BY clause. If you're using the
native change detection policy, this parameter has no effect.

3. Specify field mappings if there are differences in field name or type, or if you need
multiple versions of a source field in the search index.

4. See Create an indexer for more information about other properties.

An indexer runs automatically when it's created. You can prevent this by setting
"disabled" to true. To control indexer execution, run an indexer on demand or put it on a
schedule.

Check indexer status


To monitor the indexer status and execution history, send a Get Indexer Status request:

HTTP

GET https://myservice.search.windows.net/indexers/myindexer/status?api-
version=2020-06-30

Content-Type: application/json

api-key: [admin key]

The response includes status and the number of items processed. It should look similar
to the following example:

JSON

"status":"running",

"lastResult": {

"status":"success",

"errorMessage":null,

"startTime":"2022-02-21T00:23:24.957Z",

"endTime":"2022-02-21T00:36:47.752Z",

"errors":[],

"itemsProcessed":1599501,

"itemsFailed":0,

"initialTrackingState":null,

"finalTrackingState":null

},

"executionHistory":

"status":"success",

"errorMessage":null,

"startTime":"2022-02-21T00:23:24.957Z",

"endTime":"2022-02-21T00:36:47.752Z",

"errors":[],

"itemsProcessed":1599501,

"itemsFailed":0,

"initialTrackingState":null,

"finalTrackingState":null

},

... earlier history items

Execution history contains up to 50 of the most recently completed executions, which


are sorted in the reverse chronological order so that the latest execution comes first.

Indexing new, changed, and deleted rows


If your SQL database supports change tracking, a search indexer can pick up just the
new and updated content on subsequent indexer runs.

To enable incremental indexing, set the "dataChangeDetectionPolicy" property in your


data source definition. This property tells the indexer which change tracking mechanism
is used on your table or view.

For Azure SQL indexers, there are two change detection policies:

"SqlIntegratedChangeTrackingPolicy" (applies to tables only)

"HighWaterMarkChangeDetectionPolicy" (works for tables and views)

SQL Integrated Change Tracking Policy


We recommend using "SqlIntegratedChangeTrackingPolicy" for its efficiency and its
ability to identify deleted rows.

Database requirements:

SQL Server 2012 SP3 and later, if you're using SQL Server on Azure VMs
Azure SQL Database or SQL Managed Instance
Tables only (no views)
On the database, enable change tracking for the table
No composite primary key (a primary key containing more than one column) on
the table
No clustered indexes on the table. As a workaround, any clustered index would
have to be dropped and re-created as nonclustered index, however, performance
may be affected in the source compared to having a clustered index

Change detection policies are added to data source definitions. To use this policy, create
or update your data source like this:

HTTP

POST https://myservice.search.windows.net/datasources?api-version=2020-06-30

Content-Type: application/json

api-key: admin-key

"name" : "myazuresqldatasource",

"type" : "azuresql",

"credentials" : { "connectionString" : "connection string" },

"container" : { "name" : "table name" },

"dataChangeDetectionPolicy" : {

"@odata.type" :
"#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"

When using SQL integrated change tracking policy, don't specify a separate data
deletion detection policy. The SQL integrated change tracking policy has built-in
support for identifying deleted rows. However, for the deleted rows to be detected
automatically, the document key in your search index must be the same as the primary
key in the SQL table.

7 Note

When using TRUNCATE TABLE to remove a large number of rows from a SQL table,
the indexer needs to be reset to reset the change tracking state to pick up row
deletions.

High Water Mark Change Detection policy


This change detection policy relies on a "high water mark" column in your table or view
that captures the version or time when a row was last updated. If you're using a view,
you must use a high water mark policy.

The high water mark column must meet the following requirements:
All inserts specify a value for the column.
All updates to an item also change the value of the column.
The value of this column increases with each insert or update.
Queries with the following WHERE and ORDER BY clauses can be executed
efficiently: WHERE [High Water Mark Column] > [Current High Water Mark Value]
ORDER BY [High Water Mark Column]

7 Note

We strongly recommend using the rowversion data type for the high water mark
column. If any other data type is used, change tracking isn't guaranteed to capture
all changes in the presence of transactions executing concurrently with an indexer
query. When using rowversion in a configuration with read-only replicas, you must
point the indexer at the primary replica. Only a primary replica can be used for data
sync scenarios.

Change detection policies are added to data source definitions. To use this policy, create
or update your data source like this:

HTTP

POST https://myservice.search.windows.net/datasources?api-version=2020-06-30

Content-Type: application/json

api-key: admin-key

"name" : "myazuresqldatasource",

"type" : "azuresql",

"credentials" : { "connectionString" : "connection string" },

"container" : { "name" : "table or view name" },

"dataChangeDetectionPolicy" : {

"@odata.type" :
"#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",

"highWaterMarkColumnName" : "[a rowversion or last_updated


column name]"

7 Note

If the source table doesn't have an index on the high water mark column, queries
used by the SQL indexer may time out. In particular, the ORDER BY [High Water Mark
Column] clause requires an index to run efficiently when the table contains many

rows.
convertHighWaterMarkToRowVersion

If you're using a rowversion data type for the high water mark column, consider setting
the convertHighWaterMarkToRowVersion property in indexer configuration. Setting this
property to true results in the following behaviors:

Uses the rowversion data type for the high water mark column in the indexer SQL
query. Using the correct data type improves indexer query performance.

Subtracts one from the rowversion value before the indexer query runs. Views with
one-to-many joins may have rows with duplicate rowversion values. Subtracting
one ensures the indexer query doesn't miss these rows.

To enable this property, create or update the indexer with the following configuration:

HTTP

... other indexer definition properties

"parameters" : {

"configuration" : { "convertHighWaterMarkToRowVersion" : true }


}

queryTimeout

If you encounter timeout errors, set the queryTimeout indexer configuration setting to a
value higher than the default 5-minute timeout. For example, to set the timeout to 10
minutes, create or update the indexer with the following configuration:

HTTP

... other indexer definition properties

"parameters" : {

"configuration" : { "queryTimeout" : "00:10:00" } }

disableOrderByHighWaterMarkColumn

You can also disable the ORDER BY [High Water Mark Column] clause. However, this isn't
recommended because if the indexer execution is interrupted by an error, the indexer
has to re-process all rows if it runs later, even if the indexer has already processed
almost all the rows at the time it was interrupted. To disable the ORDER BY clause, use
the disableOrderByHighWaterMarkColumn setting in the indexer definition:

HTTP

... other indexer definition properties

"parameters" : {

"configuration" : { "disableOrderByHighWaterMarkColumn" : true }


}

Soft Delete Column Deletion Detection policy


When rows are deleted from the source table, you probably want to delete those rows
from the search index as well. If you use the SQL integrated change tracking policy, this
is taken care of for you. However, the high water mark change tracking policy doesn’t
help you with deleted rows. What to do?

If the rows are physically removed from the table, Azure Cognitive Search has no way to
infer the presence of records that no longer exist. However, you can use the “soft-
delete” technique to logically delete rows without removing them from the table. Add a
column to your table or view and mark rows as deleted using that column.

When using the soft-delete technique, you can specify the soft delete policy as follows
when creating or updating the data source:

HTTP

…,

"dataDeletionDetectionPolicy" : {

"@odata.type" :
"#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",

"softDeleteColumnName" : "[a column name]",

"softDeleteMarkerValue" : "[the value that indicates that a row


is deleted]"

The softDeleteMarkerValue must be a string in the JSON representation of your data


source. Use the string representation of your actual value. For example, if you have an
integer column where deleted rows are marked with the value 1, use "1" . If you have a
BIT column where deleted rows are marked with the Boolean true value, use the string
literal "True" or "true" , the case doesn't matter.
If you're setting up a soft delete policy from the Azure portal, don't add quotes around
the soft delete marker value. The field contents are already understood as a string and
will be translated automatically into a JSON string for you. In the examples above,
simply type 1 , True or true into the portal's field.

FAQ
Q: Can I index Always Encrypted columns?

No. Always Encrypted columns aren't currently supported by Cognitive Search indexers.

Q: Can I use Azure SQL indexer with SQL databases running on IaaS VMs in Azure?

Yes. However, you need to allow your search service to connect to your database. For
more information, see Configure a connection from an Azure Cognitive Search indexer
to SQL Server on an Azure VM.

Q: Can I use Azure SQL indexer with SQL databases running on-premises?

Not directly. We don't recommend or support a direct connection, as doing so would


require you to open your databases to Internet traffic. Customers have succeeded with
this scenario using bridge technologies like Azure Data Factory. For more information,
see Push data to an Azure Cognitive Search index using Azure Data Factory.

Q: Can I use a secondary replica in a failover cluster as a data source?

It depends. For full indexing of a table or view, you can use a secondary replica.

For incremental indexing, Azure Cognitive Search supports two change detection
policies: SQL integrated change tracking and High Water Mark.

On read-only replicas, SQL Database doesn't support integrated change tracking.


Therefore, you must use High Water Mark policy.

Our standard recommendation is to use the rowversion data type for the high water
mark column. However, using rowversion relies on the MIN_ACTIVE_ROWVERSION function,
which isn't supported on read-only replicas. Therefore, you must point the indexer to a
primary replica if you're using rowversion.

If you attempt to use rowversion on a read-only replica, you'll see the following error:

"Using a rowversion column for change tracking isn't supported on secondary (read-
only) availability replicas. Please update the datasource and specify a connection to the
primary availability replica. Current database 'Updateability' property is 'READ_ONLY'".
Q: Can I use an alternative, non-rowversion column for high water mark change
tracking?

It's not recommended. Only rowversion allows for reliable data synchronization.
However, depending on your application logic, it may be safe if:

You can ensure that when the indexer runs, there are no outstanding transactions
on the table that’s being indexed (for example, all table updates happen as a batch
on a schedule, and the Azure Cognitive Search indexer schedule is set to avoid
overlapping with the table update schedule).

You periodically do a full reindex to pick up any missed rows.


Common Language Runtime Integration
Article • 03/03/2023

Applies to:
SQL Server
Azure SQL Managed Instance

Microsoft SQL Server and Azure SQL Managed Instance enable you to implement some
of the functionalities with .NET languages using the native common language runtime
(CLR) integration as SQL Server server-side modules (procedures, functions, and
triggers). The CLR supplies managed code with services such as cross-language
integration, code access security, object lifetime management, and debugging and
profiling support. For SQL Server users and application developers, CLR integration
means that you can now write stored procedures, triggers, user-defined types, user-
defined functions (scalar and table valued), and user-defined aggregate functions using
any .NET Framework language, including Microsoft Visual Basic .NET and Microsoft
Visual C#. SQL Server includes the .NET Framework version 4 pre-installed.

2 Warning

CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer
supported as a security boundary. A CLR assembly created with PERMISSION_SET =
SAFE may be able to access external system resources, call unmanaged code, and

acquire sysadmin privileges. Beginning with SQL Server 2017 (14.x), an


sp_configure option called clr strict security is introduced to enhance the

security of CLR assemblies. clr strict security is enabled by default, and treats
SAFE and EXTERNAL_ACCESS assemblies as if they were marked UNSAFE . The clr

strict security option can be disabled for backward compatibility, but this is not

recommended. Microsoft recommends that all assemblies be signed by a certificate


or asymmetric key with a corresponding login that has been granted UNSAFE
ASSEMBLY permission in the master database. For more information, see CLR strict
security. SQL Server administrators can also add assemblies to a list of assemblies,
which the Database Engine should trust. For more information, see
sys.sp_add_trusted_assembly.

This 6-minute video shows you how to use CLR in Azure SQL Managed Instance:
https://channel9.msdn.com/Shows/Data-Exposed/Its-just-SQL-CLR-in-Azure-SQL-
Database-Managed-Instance/player?WT.mc_id=dataexposed-c9-
niner&nocookie=true&locale=en-us&embedUrl=%2Fsql%2Frelational-
databases%2Fclr-integration%2Fcommon-language-runtime-integration-overview
When to use CLR modules
CLR Integration enables you to implement complex features that are available in .NET
Framework such as regular expressions, code for accessing external resources (servers,
web services, databases), custom encryption, etc. Some of the benefits of the server-side
CLR integration are:

A better programming model. The .NET Framework languages are in many


respects richer than Transact-SQL, offering constructs and capabilities previously
not available to SQL Server developers. Developers may also leverage the power of
the .NET Framework Library, which provides an extensive set of classes that can be
used to quickly and efficiently solve programming problems.

Improved safety and security. Managed code runs in a common language run-
time environment, hosted by the Database Engine. SQL Server leverages this to
provide a safer and more secure alternative to the extended stored procedures
available in earlier versions of SQL Server.

Ability to define data types and aggregate functions. User-defined types and
user-defined aggregates are two new managed database objects that expand the
storage and querying capabilities of SQL Server.

Streamlined development through a standardized environment. Database


development is integrated into future releases of the Microsoft Visual Studio .NET
development environment. Developers use the same tools for developing and
debugging database objects and scripts as they use to write middle-tier or client-
tier .NET Framework components and services.

Potential for improved performance and scalability. In many situations, the .NET
Framework language compilation and execution models deliver improved
performance over Transact-SQL.

SQL Server language extensions provide an alternative execution environment for


runtimes close to the database engine. For a discussion of the differences between SQL
CLR and SQL language extensions, see Compare SQL Server Language Extensions to
SQL CLR.

The following table lists the topics in this section.

Overview of CLR Integration

Describes the kinds of objects that can be built using CLR integration. Also reviews the
requirements for building database objects using CLR integration.
What's New in CLR Integration

Describes the new features in this release.

Architecture of CLR Integration

Describes the design goals of CLR integration.

Enabling CLR Integration

Describes how to enable CLR integration.

See Also
Installing the .NET Framework (SQL Server only)

Performance of CLR Integration


Use Spring Data JDBC with Azure SQL
Database
Article • 04/19/2023

This tutorial demonstrates how to store data in Azure SQL Database using Spring Data
JDBC .

JDBC is the standard Java API to connect to traditional relational databases.

In this tutorial, we include two authentication methods: Azure Active Directory (Azure
AD) authentication and SQL Database authentication. The Passwordless tab shows the
Azure AD authentication and the Password tab shows the SQL Database authentication.

Azure AD authentication is a mechanism for connecting to Azure Database for SQL


Database using identities defined in Azure AD. With Azure AD authentication, you can
manage database user identities and other Microsoft services in a central location, which
simplifies permission management.

SQL Database authentication uses accounts stored in SQL Database. If you choose to
use passwords as credentials for the accounts, these credentials will be stored in the
user table. Because these passwords are stored in SQL Database, you need to manage
the rotation of the passwords by yourself.

Prerequisites
An Azure subscription - create one for free .

Java Development Kit (JDK), version 8 or higher.

Apache Maven .

Azure CLI.

sqlcmd Utility.

ODBC Driver 17 or 18.

If you don't have one, create an Azure SQL Server instance named sqlservertest
and a database named demo . For instructions, see Quickstart: Create a single
database - Azure SQL Database.
If you don't have a Spring Boot application, create a Maven project with the Spring
Initializr . Be sure to select Maven Project and, under Dependencies, add the
Spring Web, Spring Data JDBC, and MS SQL Server Driver dependencies, and
then select Java version 8 or higher.

See the sample application


In this tutorial, you'll code a sample application. If you want to go faster, this application
is already coded and available at https://github.com/Azure-Samples/quickstart-spring-
data-jdbc-sql-server .

Configure a firewall rule for your Azure SQL


Database server
Azure SQL Database instances are secured by default. They have a firewall that doesn't
allow any incoming connection.

To be able to use your database, open the server's firewall to allow the local IP address
to access the database server. For more information, see Tutorial: Secure a database in
Azure SQL Database.

If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.

Create an SQL database non-admin user and


grant permission
This step will create a non-admin user and grant all permissions on the demo database
to it.

Passwordless (Recommended)

To use passwordless connections, see Tutorial: Secure a database in Azure SQL


Database or use Service Connector to create an Azure AD admin user for your
Azure SQL Database server, as shown in the following steps:

1. First, install the Service Connector passwordless extension for the Azure CLI:

Azure CLI
az extension add --name serviceconnector-passwordless --upgrade

2. Then, use the following command to create the Azure AD non-admin user:

Azure CLI

az connection create sql \

--resource-group <your-resource-group-name> \

--connection sql_conn \

--target-resource-group <your-resource-group-name> \

--server sqlservertest \

--database demo \

--user-account \

--query authInfo.userName \

--output tsv

The Azure AD admin you created is an SQL database admin user, so you don't need
to create a new user.

) Important

Azure SQL database passwordless connections require upgrading the MS SQL


Server Driver to version 12.1.0 or higher. The connection option is
authentication=DefaultAzureCredential in version 12.1.0 and

authentication=ActiveDirectoryDefault in version 12.2.0 .

Store data from Azure SQL Database


With an Azure SQL Database instance, you can store data by using Spring Cloud Azure.

To install the Spring Cloud Azure Starter module, add the following dependencies to
your pom.xml file:

The Spring Cloud Azure Bill of Materials (BOM):

XML

<dependencyManagement>

<dependencies>

<dependency>

<groupId>com.azure.spring</groupId>

<artifactId>spring-cloud-azure-dependencies</artifactId>

<version>4.9.0</version>

<type>pom</type>

<scope>import</scope>

</dependency>

</dependencies>

</dependencyManagement>

7 Note

If you're using Spring Boot 3.x, be sure to set the spring-cloud-azure-


dependencies version to 5.3.0 .
For more information about the spring-cloud-

azure-dependencies version, see Which Version of Spring Cloud Azure Should


I Use .

The Spring Cloud Azure Starter artifact:

XML

<dependency>

<groupId>com.azure.spring</groupId>

<artifactId>spring-cloud-azure-starter</artifactId>

</dependency>

Configure Spring Boot to use Azure SQL Database


To store data from Azure SQL Database using Spring Data JDBC, follow these steps to
configure the application:

1. Configure an Azure SQL Database credentials in the application.properties


configuration file.

Passwordless (Recommended)

properties

logging.level.org.springframework.jdbc.core=DEBUG

spring.datasource.url=jdbc:sqlserver://sqlservertest.database.windo
ws.net:1433;databaseName=demo;authentication=DefaultAzureCredential
;

spring.sql.init.mode=always

2 Warning

The configuration property spring.sql.init.mode=always means that Spring


Boot will automatically generate a database schema, using the schema.sql file
that you'll create next, each time the server is started. This is great for testing,
but remember that this will delete your data at each restart, so you shouldn't
use it in production.

2. Create the src/main/resources/schema.sql configuration file to configure the


database schema, then add the following contents.

SQL

DROP TABLE IF EXISTS todo;

CREATE TABLE todo (id INT IDENTITY PRIMARY KEY, description


VARCHAR(255), details VARCHAR(4096), done BIT);

3. Create a new Todo Java class. This class is a domain model mapped onto the todo
table that will be created automatically by Spring Boot. The following code ignores
the getters and setters methods.

Java

import org.springframework.data.annotation.Id;

public class Todo {

public Todo() {

public Todo(String description, String details, boolean done) {

this.description = description;

this.details = details;

this.done = done;

@Id

private Long id;

private String description;

private String details;

private boolean done;

4. Edit the startup class file to show the following content.

Java

import org.springframework.boot.SpringApplication;

import org.springframework.boot.autoconfigure.SpringBootApplication;

import org.springframework.boot.context.event.ApplicationReadyEvent;

import org.springframework.context.ApplicationListener;

import org.springframework.context.annotation.Bean;

import org.springframework.data.repository.CrudRepository;

import java.util.stream.Stream;

@SpringBootApplication

public class DemoApplication {

public static void main(String[] args) {

SpringApplication.run(DemoApplication.class, args);

@Bean

ApplicationListener<ApplicationReadyEvent>
basicsApplicationListener(TodoRepository repository) {

return event->repository

.saveAll(Stream.of("A", "B", "C").map(name->new


Todo("configuration", "congratulations, you have set up correctly!",
true)).toList())

.forEach(System.out::println);

interface TodoRepository extends CrudRepository<Todo, Long> {

 Tip

In this tutorial, there are no authentication operations in the configurations or


the code. However, connecting to Azure services requires authentication. To
complete the authentication, you need to use Azure Identity. Spring Cloud
Azure uses DefaultAzureCredential , which the Azure Identity library provides
to help you get credentials without any code changes.

DefaultAzureCredential supports multiple authentication methods and


determines which method to use at runtime. This approach enables your app
to use different authentication methods in different environments (such as
local and production environments) without implementing environment-
specific code. For more information, see the Default Azure credential section
of Authenticate Azure-hosted Java applications.

To complete the authentication in local development environments, you can


use Azure CLI, Visual Studio Code, PowerShell or other methods. For more
information, see Azure authentication in Java development environments. To
complete the authentication in Azure hosting environments, we recommend
using managed identity. For more information, see What are managed
identities for Azure resources?

5. Start the application. The application stores data into the database. You'll see logs
similar to the following example:

shell

2023-02-01 10:22:36.701 DEBUG 7948 --- [main]


o.s.jdbc.core.JdbcTemplate : Executing prepared SQL statement [INSERT
INTO todo (description, details, done) VALUES (?, ?, ?)]

com.example.demo.Todo@4bdb04c8

Deploy to Azure Spring Apps


Now that you have the Spring Boot application running locally, it's time to move it to
production. Azure Spring Apps makes it easy to deploy Spring Boot applications to
Azure without any code changes. The service manages the infrastructure of Spring
applications so developers can focus on their code. Azure Spring Apps provides lifecycle
management using comprehensive monitoring and diagnostics, configuration
management, service discovery, CI/CD integration, blue-green deployments, and more.
To deploy your application to Azure Spring Apps, see Deploy your first application to
Azure Spring Apps.

Next steps
Azure for Spring developers
Use Spring Data JPA with Azure SQL
Database
Article • 04/19/2023

This tutorial demonstrates how to store data in Azure SQL Database using Spring Data
JPA .

The Java Persistence API (JPA) is the standard Java API for object-relational mapping.

In this tutorial, we include two authentication methods: Azure Active Directory (Azure
AD) authentication and SQL Database authentication. The Passwordless tab shows the
Azure AD authentication and the Password tab shows the SQL Database authentication.

Azure AD authentication is a mechanism for connecting to Azure Database for SQL


Database using identities defined in Azure AD. With Azure AD authentication, you can
manage database user identities and other Microsoft services in a central location, which
simplifies permission management.

SQL Database authentication uses accounts stored in SQL Database. If you choose to
use passwords as credentials for the accounts, these credentials will be stored in the
user table. Because these passwords are stored in SQL Database, you need to manage
the rotation of the passwords by yourself.

Prerequisites
An Azure subscription - create one for free .

Java Development Kit (JDK), version 8 or higher.

Apache Maven .

Azure CLI.

sqlcmd Utility

ODBC Driver 17 or 18.

If you don't have one, create an Azure SQL Server instance named sqlservertest
and a database named demo . For instructions, see Quickstart: Create a single
database - Azure SQL Database.
If you don't have a Spring Boot application, create a Maven project with the Spring
Initializr . Be sure to select Maven Project and, under Dependencies, add the
Spring Web, Spring Data JPA, and MS SQL Server Driver dependencies, and then
select Java version 8 or higher.

) Important

To use passwordless connections, upgrade MS SQL Server Driver to version


12.1.0 or higher, and then create an Azure AD admin user for your Azure SQL
Database server instance. For more information, see the Create an Azure AD admin
section of Tutorial: Secure a database in Azure SQL Database.

See the sample application


In this tutorial, you'll code a sample application. If you want to go faster, this application
is already coded and available at https://github.com/Azure-Samples/quickstart-spring-
data-jpa-sql-server .

Configure a firewall rule for your Azure SQL


Database server
Azure SQL Database instances are secured by default. They have a firewall that doesn't
allow any incoming connection.

To be able to use your database, open the server's firewall to allow the local IP address
to access the database server. For more information, see Tutorial: Secure a database in
Azure SQL Database.

If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.

Create an SQL database non-admin user and


grant permission
This step will create a non-admin user and grant all permissions on the demo database
to it.

Passwordless (Recommended)
To use passwordless connections, see Tutorial: Secure a database in Azure SQL
Database or use Service Connector to create an Azure AD admin user for your
Azure SQL Database server, as shown in the following steps:

1. First, install the Service Connector passwordless extension for the Azure CLI:

Azure CLI

az extension add --name serviceconnector-passwordless --upgrade

2. Then, use the following command to create the Azure AD non-admin user:

Azure CLI

az connection create sql \

--resource-group <your-resource-group-name> \

--connection sql_conn \

--target-resource-group <your-resource-group-name> \

--server sqlservertest \

--database demo \

--user-account \

--query authInfo.userName \

--output tsv

The Azure AD admin you created is an SQL database admin user, so you don't need
to create a new user.

) Important

Azure SQL database passwordless connections require upgrading the MS SQL


Server Driver to version 12.1.0 or higher. The connection option is
authentication=DefaultAzureCredential in version 12.1.0 and
authentication=ActiveDirectoryDefault in version 12.2.0 .

Store data from Azure SQL Database


With an Azure SQL Database instance, you can store data by using Spring Cloud Azure.

To install the Spring Cloud Azure Starter module, add the following dependencies to
your pom.xml file:

The Spring Cloud Azure Bill of Materials (BOM):


XML

<dependencyManagement>

<dependencies>

<dependency>

<groupId>com.azure.spring</groupId>

<artifactId>spring-cloud-azure-dependencies</artifactId>

<version>4.9.0</version>

<type>pom</type>

<scope>import</scope>

</dependency>

</dependencies>

</dependencyManagement>

7 Note

If you're using Spring Boot 3.x, be sure to set the spring-cloud-azure-


dependencies version to 5.3.0 .
For more information about the spring-cloud-

azure-dependencies version, see Which Version of Spring Cloud Azure Should


I Use .

The Spring Cloud Azure Starter artifact:

XML

<dependency>

<groupId>com.azure.spring</groupId>

<artifactId>spring-cloud-azure-starter</artifactId>

</dependency>

Configure Spring Boot to use Azure SQL Database


To store data from Azure SQL Database using Spring Data JPA, follow these steps to
configure the application:

1. Configure an Azure SQL Database credentials in the application.properties


configuration file.

Passwordless (Recommended)

properties

logging.level.org.hibernate.SQL=DEBUG

spring.datasource.url=jdbc:sqlserver://sqlservertest.database.windo
ws.net:1433;databaseName=demo;authentication=DefaultAzureCredential
;

spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLSe
rver2016Dialect

spring.jpa.hibernate.ddl-auto=create-drop

2 Warning

The configuration property spring.jpa.hibernate.ddl-auto=create-drop


means that Spring Boot will automatically create a database schema at
application start-up, and will try to delete it when it shuts down. This feature is
great for testing, but remember that it will delete your data at each restart, so
you shouldn't use it in production.

2. Create a new Todo Java class. This class is a domain model mapped onto the todo
table that will be created automatically by JPA. The following code ignores the
getters and setters methods.

Java

package com.example.demo;

import javax.persistence.Entity;

import javax.persistence.GeneratedValue;

import javax.persistence.Id;

@Entity

public class Todo {

public Todo() {

public Todo(String description, String details, boolean done) {

this.description = description;

this.details = details;

this.done = done;

@Id

@GeneratedValue

private Long id;

private String description;

private String details;

private boolean done;

3. Edit the startup class file to show the following content.

Java

import org.springframework.boot.SpringApplication;

import org.springframework.boot.autoconfigure.SpringBootApplication;

import org.springframework.boot.context.event.ApplicationReadyEvent;

import org.springframework.context.ApplicationListener;

import org.springframework.context.annotation.Bean;

import org.springframework.data.jpa.repository.JpaRepository;

import java.util.stream.Collectors;

import java.util.stream.Stream;

@SpringBootApplication

public class DemoApplication {

public static void main(String[] args) {

SpringApplication.run(DemoApplication.class, args);

@Bean

ApplicationListener<ApplicationReadyEvent>
basicsApplicationListener(TodoRepository repository) {

return event->repository

.saveAll(Stream.of("A", "B", "C").map(name->new


Todo("configuration", "congratulations, you have set up correctly!",
true)).collect(Collectors.toList()))

.forEach(System.out::println);

interface TodoRepository extends JpaRepository<Todo, Long> {

 Tip

In this tutorial, there are no authentication operations in the configurations or


the code. However, connecting to Azure services requires authentication. To
complete the authentication, you need to use Azure Identity. Spring Cloud
Azure uses DefaultAzureCredential , which the Azure Identity library provides
to help you get credentials without any code changes.
DefaultAzureCredential supports multiple authentication methods and

determines which method to use at runtime. This approach enables your app
to use different authentication methods in different environments (such as
local and production environments) without implementing environment-
specific code. For more information, see the Default Azure credential section
of Authenticate Azure-hosted Java applications.

To complete the authentication in local development environments, you can


use Azure CLI, Visual Studio Code, PowerShell or other methods. For more
information, see Azure authentication in Java development environments. To
complete the authentication in Azure hosting environments, we recommend
using managed identity. For more information, see What are managed
identities for Azure resources?

4. Start the application. You'll see logs similar to the following example:

shell

2023-02-01 10:29:19.763 DEBUG 4392 --- [main] org.hibernate.SQL :


insert into todo (description, details, done, id) values (?, ?, ?, ?)

com.example.demo.Todo@1f

Deploy to Azure Spring Apps


Now that you have the Spring Boot application running locally, it's time to move it to
production. Azure Spring Apps makes it easy to deploy Spring Boot applications to
Azure without any code changes. The service manages the infrastructure of Spring
applications so developers can focus on their code. Azure Spring Apps provides lifecycle
management using comprehensive monitoring and diagnostics, configuration
management, service discovery, CI/CD integration, blue-green deployments, and more.
To deploy your application to Azure Spring Apps, see Deploy your first application to
Azure Spring Apps.

Next steps
Azure for Spring developers
Use Spring Data R2DBC with Azure SQL
Database
Article • 05/26/2023

This article demonstrates creating a sample application that uses Spring Data R2DBC
to store and retrieve information in Azure SQL Database by using the R2DBC
implementation for Microsoft SQL Server from the r2dbc-mssql GitHub repository .

R2DBC brings reactive APIs to traditional relational databases. You can use it with
Spring WebFlux to create fully reactive Spring Boot applications that use non-blocking
APIs. It provides better scalability than the classic "one thread per connection" approach.

Prerequisites
An Azure subscription - create one for free .

Java Development Kit (JDK), version 8 or higher.

Apache Maven .

Azure CLI.

sqlcmd Utility.

cURL or a similar HTTP utility to test functionality.

See the sample application


In this article, you'll code a sample application. If you want to go faster, this application
is already coded and available at https://github.com/Azure-Samples/quickstart-spring-
data-r2dbc-sql-server .

Prepare the working environment


First, set up some environment variables by using the following commands:

Bash

export AZ_RESOURCE_GROUP=database-workshop
export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
export AZ_LOCATION=<YOUR_AZURE_REGION>
export AZ_SQL_SERVER_ADMIN_USERNAME=spring
export AZ_SQL_SERVER_ADMIN_PASSWORD=<YOUR_AZURE_SQL_ADMIN_PASSWORD>
export AZ_SQL_SERVER_NON_ADMIN_USERNAME=nonspring
export AZ_SQL_SERVER_NON_ADMIN_PASSWORD=<YOUR_AZURE_SQL_NON_ADMIN_PASSWORD>
export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>

Replace the placeholders with the following values, which are used throughout this
article:

<YOUR_DATABASE_NAME> : The name of your Azure SQL Database server, which should

be unique across Azure.


<YOUR_AZURE_REGION> : The Azure region you'll use. You can use eastus by default,

but we recommend that you configure a region closer to where you live. You can
see the full list of available regions by using az account list-locations .
<AZ_SQL_SERVER_ADMIN_PASSWORD> and <AZ_SQL_SERVER_NON_ADMIN_PASSWORD> : The

password of your Azure SQL Database server, which should have a minimum of
eight characters. The characters should be from three of the following categories:
English uppercase letters, English lowercase letters, numbers (0-9), and non-
alphanumeric characters (!, $, #, %, and so on).
<YOUR_LOCAL_IP_ADDRESS> : The IP address of your local computer, from which you'll

run your Spring Boot application. One convenient way to find it is to open
whatismyip.akamai.com .

Next, create a resource group by using the following command:

Azure CLI

az group create \
--name $AZ_RESOURCE_GROUP \
--location $AZ_LOCATION \
--output tsv

Create an Azure SQL Database instance


Next, create a managed Azure SQL Database server instance by running the following
command.

7 Note

The MS SQL password has to meet specific criteria, and setup will fail with a non-
compliant password. For more information, see Password Policy.

Azure CLI
az sql server create \
--resource-group $AZ_RESOURCE_GROUP \
--name $AZ_DATABASE_NAME \
--location $AZ_LOCATION \
--admin-user $AZ_SQL_SERVER_ADMIN_USERNAME \
--admin-password $AZ_SQL_SERVER_ADMIN_PASSWORD \
--output tsv

Configure a firewall rule for your Azure SQL


Database server
Azure SQL Database instances are secured by default. They have a firewall that doesn't
allow any incoming connection. To be able to use your database, you need to add a
firewall rule that will allow the local IP address to access the database server.

Because you configured your local IP address at the beginning of this article, you can
open the server's firewall by running the following command:

Azure CLI

az sql server firewall-rule create \


--resource-group $AZ_RESOURCE_GROUP \
--name $AZ_DATABASE_NAME-database-allow-local-ip \
--server $AZ_DATABASE_NAME \
--start-ip-address $AZ_LOCAL_IP_ADDRESS \
--end-ip-address $AZ_LOCAL_IP_ADDRESS \
--output tsv

If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.

Obtain the IP address of your host machine by running the following command in WSL:

Bash

cat /etc/resolv.conf

Copy the IP address following the term nameserver , then use the following command to
set an environment variable for the WSL IP Address:

Bash

export AZ_WSL_IP_ADDRESS=<the-copied-IP-address>
Then, use the following command to open the server's firewall to your WSL-based app:

Azure CLI

az sql server firewall-rule create \


--resource-group $AZ_RESOURCE_GROUP \
--name $AZ_DATABASE_NAME-database-allow-local-ip-wsl \
--server $AZ_DATABASE_NAME \
--start-ip-address $AZ_WSL_IP_ADDRESS \
--end-ip-address $AZ_WSL_IP_ADDRESS \
--output tsv

Configure an Azure SQL database


The Azure SQL Database server that you created earlier is empty. It doesn't have any
database that you can use with the Spring Boot application. Create a new database
called demo by running the following command:

Azure CLI

az sql db create \
--resource-group $AZ_RESOURCE_GROUP \
--name demo \
--server $AZ_DATABASE_NAME \
--output tsv

Create an SQL database non-admin user and


grant permission
This step will create a non-admin user and grant all permissions on the demo database
to it.

Create a SQL script called create_user.sql for creating a non-admin user. Add the
following contents and save it locally:

Bash

cat << EOF > create_user.sql


USE demo;
GO
CREATE USER $AZ_SQL_SERVER_NON_ADMIN_USERNAME WITH
PASSWORD='$AZ_SQL_SERVER_NON_ADMIN_PASSWORD'
GO
GRANT CONTROL ON DATABASE::demo TO $AZ_SQL_SERVER_NON_ADMIN_USERNAME;
GO
EOF

Then, use the following command to run the SQL script to create the non-admin user:

Bash

sqlcmd -S $AZ_DATABASE_NAME.database.windows.net,1433 -d demo -U


$AZ_SQL_SERVER_ADMIN_USERNAME -P $AZ_SQL_SERVER_ADMIN_PASSWORD -i
create_user.sql

7 Note

For more information about creating SQL database users, see CREATE USER
(Transact-SQL).

Create a reactive Spring Boot application


To create a reactive Spring Boot application, we'll use Spring Initializr . The application
that we'll create uses:

Spring Boot 2.7.11.


The following dependencies: Spring Reactive Web (also known as Spring WebFlux)
and Spring Data R2DBC.

Generate the application by using Spring


Initializr
Generate the application on the command line by running the following command:

Bash

curl https://start.spring.io/starter.tgz -d dependencies=webflux,data-r2dbc


-d baseDir=azure-database-workshop -d bootVersion=2.7.11 -d javaVersion=17 |
tar -xzvf -

Add the reactive Azure SQL Database driver


implementation
Open the generated project's pom.xml file to add the reactive Azure SQL Database
driver from the r2dbc-mssql GitHub repository .

After the spring-boot-starter-webflux dependency, add the following text:

XML

<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-mssql</artifactId>
<scope>runtime</scope>
</dependency>

Configure Spring Boot to use Azure SQL Database


Open the src/main/resources/application.properties file, and add the following text:

properties

logging.level.org.springframework.data.r2dbc=DEBUG

spring.r2dbc.url=r2dbc:pool:mssql://$AZ_DATABASE_NAME.database.windows.net:1
433/demo
spring.r2dbc.username=nonspring@$AZ_DATABASE_NAME
spring.r2dbc.password=$AZ_SQL_SERVER_NON_ADMIN_PASSWORD

Replace the two $AZ_DATABASE_NAME variables and the


$AZ_SQL_SERVER_NON_ADMIN_PASSWORD variable with the values that you configured at the

beginning of this article.

7 Note

For better performance, the spring.r2dbc.url property is configured to use a


connection pool using r2dbc-pool .

You should now be able to start your application by using the provided Maven wrapper
as follows:

Bash

./mvnw spring-boot:run

Here's a screenshot of the application running for the first time:


Create the database schema


Inside the main DemoApplication class, configure a new Spring bean that will create a
database schema, using the following code:

Java

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.ClassPathResource;
import
org.springframework.data.r2dbc.connectionfactory.init.ConnectionFactoryIniti
alizer;
import
org.springframework.data.r2dbc.connectionfactory.init.ResourceDatabasePopula
tor;

import io.r2dbc.spi.ConnectionFactory;

@SpringBootApplication
public class DemoApplication {

public static void main(String[] args) {


SpringApplication.run(DemoApplication.class, args);
}

@Bean
public ConnectionFactoryInitializer initializer(ConnectionFactory
connectionFactory) {
ConnectionFactoryInitializer initializer = new
ConnectionFactoryInitializer();
initializer.setConnectionFactory(connectionFactory);
ResourceDatabasePopulator populator = new
ResourceDatabasePopulator(new ClassPathResource("schema.sql"));
initializer.setDatabasePopulator(populator);
return initializer;
}
}
This Spring bean uses a file called schema.sql, so create that file in the
src/main/resources folder, and add the following text:

SQL

DROP TABLE IF EXISTS todo;


CREATE TABLE todo (id INT IDENTITY PRIMARY KEY, description VARCHAR(255),
details VARCHAR(4096), done BIT);

Stop the running application, and start it again using the following command. The
application will now use the demo database that you created earlier, and create a todo
table inside it.

Bash

./mvnw spring-boot:run

Here's a screenshot of the database table as it's being created:

Code the application


Next, add the Java code that will use R2DBC to store and retrieve data from your Azure
SQL Database server.

Create a new Todo Java class, next to the DemoApplication class, using the following
code:

Java

package com.example.demo;

import org.springframework.data.annotation.Id;

public class Todo {

public Todo() {
}

public Todo(String description, String details, boolean done) {


this.description = description;
this.details = details;
this.done = done;
}

@Id
private Long id;

private String description;

private String details;

private boolean done;

public Long getId() {


return id;
}

public void setId(Long id) {


this.id = id;
}

public String getDescription() {


return description;
}

public void setDescription(String description) {


this.description = description;
}

public String getDetails() {


return details;
}

public void setDetails(String details) {


this.details = details;
}

public boolean isDone() {


return done;
}

public void setDone(boolean done) {


this.done = done;
}
}

This class is a domain model mapped on the todo table that you created before.

To manage that class, you need a repository. Define a new TodoRepository interface in
the same package, using the following code:

Java
package com.example.demo;

import org.springframework.data.repository.reactive.ReactiveCrudRepository;

public interface TodoRepository extends ReactiveCrudRepository<Todo, Long> {


}

This repository is a reactive repository that Spring Data R2DBC manages.

Finish the application by creating a controller that can store and retrieve data.
Implement a TodoController class in the same package, and add the following code:

Java

package com.example.demo;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/")
public class TodoController {

private final TodoRepository todoRepository;

public TodoController(TodoRepository todoRepository) {


this.todoRepository = todoRepository;
}

@PostMapping("/")
@ResponseStatus(HttpStatus.CREATED)
public Mono<Todo> createTodo(@RequestBody Todo todo) {
return todoRepository.save(todo);
}

@GetMapping("/")
public Flux<Todo> getTodos() {
return todoRepository.findAll();
}
}

Finally, halt the application and start it again using the following command:

Bash

./mvnw spring-boot:run
Test the application
To test the application, you can use cURL.

First, create a new "todo" item in the database using the following command:

Bash

curl --header "Content-Type: application/json" \


--request POST \
--data '{"description":"configuration","details":"congratulations, you
have set up R2DBC correctly!","done": "true"}' \
http://127.0.0.1:8080

This command should return the created item, as shown here:

JSON

{"id":1,"description":"configuration","details":"congratulations, you have


set up R2DBC correctly!","done":true}

Next, retrieve the data by using a new cURL request with the following command:

Bash

curl http://127.0.0.1:8080

This command will return the list of "todo" items, including the item you've created, as
shown here:

JSON

[{"id":1,"description":"configuration","details":"congratulations, you have


set up R2DBC correctly!","done":true}]

Here's a screenshot of these cURL requests:

Congratulations! You've created a fully reactive Spring Boot application that uses R2DBC
to store and retrieve data from Azure SQL Database.
Clean up resources
To clean up all resources used during this quickstart, delete the resource group by using
the following command:

Azure CLI

az group delete \
--name $AZ_RESOURCE_GROUP \
--yes

Next steps
To learn more about deploying a Spring Data application to Azure Spring Apps and
using managed identity, see Tutorial: Deploy a Spring application to Azure Spring Apps
with a passwordless connection to an Azure database.

To learn more about Spring and Azure, continue to the Spring on Azure documentation
center.

Spring on Azure

See also
For more information about Spring Data R2DBC, see Spring's reference
documentation .

For more information about using Azure with Java, see Azure for Java developers and
Working with Azure DevOps and Java.
Tutorial: Migrate SQL Server to Azure
SQL Database using DMS (classic)
Article • 03/08/2023

) Important

Azure Database Migration Service (classic) - SQL scenarios are on a deprecation


path . Beginning 01 August 2023, you will no longer be able to create new
Database Migration Service (classic) resource for SQL Server scenarios from Azure
portal and will be retired on 15 March 2026 for all customers. Please migrate to
Azure SQL database services by using the latest Azure Database Migration
Service version which is available as an extension in Azure Data Studio,or by
using Azure PowerShell and Azure CLI. Learn more .

7 Note

This tutorial uses an older version of the Azure Database Migration Service. For
improved functionality and supportability, consider migrating to Azure SQL
Database by using the Azure SQL migration extension for Azure Data Studio.

To compare features between versions, review compare versions.

You can use Azure Database Migration Service to migrate the databases from a SQL
Server instance to Azure SQL Database. In this tutorial, you migrate the
AdventureWorks2016 database restored to an on-premises instance of SQL Server 2016
(or later) to a single database or pooled database in Azure SQL Database by using Azure
Database Migration Service.

You will learn how to:

" Assess and evaluate your on-premises database for any blocking issues by using
the Data Migration Assistant.
" Use the Data Migration Assistant to migrate the database sample schema.
" Register the Azure DataMigration resource provider.
" Create an instance of Azure Database Migration Service.
" Create a migration project by using Azure Database Migration Service.
" Run the migration.
" Monitor the migration.
Prerequisites
To complete this tutorial, you need to:

Download and install SQL Server 2016 or later .

Enable the TCP/IP protocol, which is disabled by default during SQL Server Express
installation, by following the instructions in the article Enable or Disable a Server
Network Protocol.

Restore the AdventureWorks2016 database to the SQL Server instance.

Create a database in Azure SQL Database, which you do by following the details in
the article Create a database in Azure SQL Database using the Azure portal. For
purposes of this tutorial, the name of the Azure SQL Database is assumed to be
AdventureWorksAzure, but you can provide whatever name you wish.

7 Note

If you use SQL Server Integration Services (SSIS) and want to migrate the
catalog database for your SSIS projects/packages (SSISDB) from SQL Server to
Azure SQL Database, the destination SSISDB will be created and managed
automatically on your behalf when you provision SSIS in Azure Data Factory
(ADF). For more information about migrating SSIS packages, see the article
Migrate SQL Server Integration Services packages to Azure.

Download and install the latest version of the Data Migration Assistant .

Create a Microsoft Azure Virtual Network for Azure Database Migration Service by
using the Azure Resource Manager deployment model, which provides site-to-site
connectivity to your on-premises source servers by using either ExpressRoute or
VPN. For more information about creating a virtual network, see the Virtual
Network Documentation, and especially the quickstart articles with step-by-step
details.

7 Note

During virtual network setup, if you use ExpressRoute with network peering to
Microsoft, add the following service endpoints to the subnet in which the
service will be provisioned:
Target database endpoint (for example, SQL endpoint, Azure Cosmos DB
endpoint, and so on)
Storage endpoint
Service bus endpoint

This configuration is necessary because Azure Database Migration Service


lacks internet connectivity.

If you don’t have site-to-site connectivity between the on-premises network


and Azure or if there is limited site-to-site connectivity bandwidth, consider
using Azure Database Migration Service in hybrid mode (Preview). Hybrid
mode leverages an on-premises migration worker together with an instance
of Azure Database Migration Service running in the cloud. To create an
instance of Azure Database Migration Service in hybrid mode, see the article
Create an instance of Azure Database Migration Service in hybrid mode
using the Azure portal.

Ensure that your virtual network Network Security Group outbound security rules
don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and
AzureMonitor. For more detail on Azure virtual network NSG traffic filtering, see
the article Filter network traffic with network security groups.

Configure your Windows Firewall for database engine access.

Open your Windows firewall to allow Azure Database Migration Service to access
the source SQL Server, which by default is TCP port 1433. If your default instance is
listening on some other port, add that to the firewall.

If you're running multiple named SQL Server instances using dynamic ports, you
may wish to enable the SQL Browser Service and allow access to UDP port 1434
through your firewalls so that Azure Database Migration Service can connect to a
named instance on your source server.

When using a firewall appliance in front of your source database(s), you may need
to add firewall rules to allow Azure Database Migration Service to access the
source database(s) for migration.

Create a server-level IP firewall rule for Azure SQL Database to allow Azure
Database Migration Service access to the target databases. Provide the subnet
range of the virtual network used for Azure Database Migration Service.

Ensure that the credentials used to connect to source SQL Server instance have
CONTROL SERVER permissions.
Ensure that the credentials used to connect to target Azure SQL Database instance
have CONTROL DATABASE permission on the target databases.

) Important

Creating an instance of Azure Database Migration Service requires access to


virtual network settings that are normally not within the same resource group.
As a result, the user creating an instance of DMS requires permission at
subscription level. To create the required roles, which you can assign as
needed, run the following script:

$readerActions = `

"Microsoft.Network/networkInterfaces/ipConfigurations/read", `

"Microsoft.DataMigration/*/read", `

"Microsoft.Resources/subscriptions/resourceGroups/read"

$writerActions = `

"Microsoft.DataMigration/services/*/write", `

"Microsoft.DataMigration/services/*/delete", `

"Microsoft.DataMigration/services/*/action", `

"Microsoft.Network/virtualNetworks/subnets/join/action", `

"Microsoft.Network/virtualNetworks/write", `

"Microsoft.Network/virtualNetworks/read", `

"Microsoft.Resources/deployments/validate/action", `

"Microsoft.Resources/deployments/*/read", `

"Microsoft.Resources/deployments/*/write"

$writerActions += $readerActions

# TODO: replace with actual subscription IDs

$subScopes = ,"/subscriptions/00000000-0000-0000-0000-
000000000000/","/subscriptions/11111111-1111-1111-1111-
111111111111/"

function New-DmsReaderRole() {

$aRole =
[Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefi
nition]::new()

$aRole.Name = "Azure Database Migration Reader"

$aRole.Description = "Lets you perform read only actions on DMS


service/project/tasks."

$aRole.IsCustom = $true

$aRole.Actions = $readerActions

$aRole.NotActions = @()

$aRole.AssignableScopes = $subScopes

#Create the role

New-AzRoleDefinition -Role $aRole


}

function New-DmsContributorRole() {

$aRole =
[Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefi
nition]::new()

$aRole.Name = "Azure Database Migration Contributor"

$aRole.Description = "Lets you perform CRUD actions on DMS


service/project/tasks."

$aRole.IsCustom = $true

$aRole.Actions = $writerActions

$aRole.NotActions = @()

$aRole.AssignableScopes = $subScopes

#Create the role

New-AzRoleDefinition -Role $aRole


}

function Update-DmsReaderRole() {
$aRole = Get-AzRoleDefinition "Azure Database Migration Reader"

$aRole.Actions = $readerActions

$aRole.NotActions = @()

Set-AzRoleDefinition -Role $aRole


}

function Update-DmsConributorRole() {

$aRole = Get-AzRoleDefinition "Azure Database Migration


Contributor"

$aRole.Actions = $writerActions

$aRole.NotActions = @()

Set-AzRoleDefinition -Role $aRole


}

# Invoke above functions

New-DmsReaderRole

New-DmsContributorRole

Update-DmsReaderRole

Update-DmsConributorRole

Assess your on-premises database


Before you can migrate data from a SQL Server instance to a single database or pooled
database in Azure SQL Database, you need to assess the SQL Server database for any
blocking issues that might prevent migration. Using the Data Migration Assistant, follow
the steps described in the article Performing a SQL Server migration assessment to
complete the on-premises database assessment. A summary of the required steps
follows:
1. In the Data Migration Assistant, select the New (+) icon, and then select the
Assessment project type.

2. Specify a project name. From the Assessment type drop-down list, select Database
Engine, in the Source server type text box, select SQL Server, in the Target server
type text box, select Azure SQL Database, and then select Create to create the
project.

When you're assessing the source SQL Server database migrating to a single
database or pooled database in Azure SQL Database, you can choose one or both
of the following assessment report types:

Check database compatibility


Check feature parity

Both report types are selected by default.

3. In the Data Migration Assistant, on the Options screen, select Next.

4. On the Select sources screen, in the Connect to a server dialog box, provide the
connection details to your SQL Server, and then select Connect.

5. In the Add sources dialog box, select AdventureWorks2016, select Add, and then
select Start Assessment.

7 Note

If you use SSIS, DMA does not currently support the assessment of the source
SSISDB. However, SSIS projects/packages will be assessed/validated as they
are redeployed to the destination SSISDB hosted by Azure SQL Database. For
more information about migrating SSIS packages, see the article Migrate SQL
Server Integration Services packages to Azure.

When the assessment is complete, the results display as shown in the following
graphic:
For databases in Azure SQL Database, the assessments identify feature parity
issues and migration blocking issues for deploying to a single database or pooled
database.

The SQL Server feature parity category provides a comprehensive set of


recommendations, alternative approaches available in Azure, and mitigating
steps to help you plan the effort into your migration projects.
The Compatibility issues category identifies partially supported or
unsupported features that reflect compatibility issues that might block
migrating SQL Server database(s) to Azure SQL Database. Recommendations
are also provided to help you address those issues.

6. Review the assessment results for migration blocking issues and feature parity
issues by selecting the specific options.

Migrate the sample schema


After you're comfortable with the assessment and satisfied that the selected database is
a viable candidate for migration to a single database or pooled database in Azure SQL
Database, use DMA to migrate the schema to Azure SQL Database.

7 Note

Before you create a migration project in Data Migration Assistant, be sure that you
have already provisioned a database in Azure as mentioned in the prerequisites.

) Important
If you use SSIS, DMA does not currently support the migration of source SSISDB,
but you can redeploy your SSIS projects/packages to the destination SSISDB hosted
by Azure SQL Database. For more information about migrating SSIS packages, see
the article Migrate SQL Server Integration Services packages to Azure.

To migrate the AdventureWorks2016 schema to a single database or pooled database


Azure SQL Database, perform the following steps:

1. In the Data Migration Assistant, select the New (+) icon, and then under Project
type, select Migration.

2. Specify a project name, in the Source server type text box, select SQL Server, and
then in the Target server type text box, select Azure SQL Database.

3. Under Migration Scope, select Schema only.

After performing the previous steps, the Data Migration Assistant interface should
appear as shown in the following graphic:

4. Select Create to create the project.

5. In the Data Migration Assistant, specify the source connection details for your SQL
Server, select Connect, and then select the AdventureWorks2016 database.
6. Select Next, under Connect to target server, specify the target connection details
for the Azure SQL Database, select Connect, and then select the
AdventureWorksAzure database you had pre-provisioned in Azure SQL Database.

7. Select Next to advance to the Select objects screen, on which you can specify the
schema objects in the AdventureWorks2016 database that need to be deployed to
Azure SQL Database.

By default, all objects are selected.


8. Select Generate SQL script to create the SQL scripts, and then review the scripts
for any errors.

9. Select Deploy schema to deploy the schema to Azure SQL Database, and then
after the schema is deployed, check the target server for any anomalies.
Register the resource provider
Register the Microsoft.DataMigration resource provider before you create your first
instance of the Database Migration Service.

1. Sign in to the Azure portal. Search for and select Subscriptions.

2. Select the subscription in which you want to create the instance of Azure Database
Migration Service, and then select Resource providers.
3. Search for migration, and then select Register for Microsoft.DataMigration.

Create an Azure Database Migration Service


instance
1. In the Azure portal menu or on the Home page, select Create a resource. Search
for and select Azure Database Migration Service.
2. On the Azure Database Migration Service screen, select Create.

Select the appropriate Source server type and Target server type, and choose the
Database Migration Service (Classic) option.
3. On the Create Migration Service basics screen:

Select the subscription.


Create a new resource group or choose an existing one.
Specify a name for the instance of the Azure Database Migration Service.
Select the location in which you want to create the instance of Azure
Database Migration Service.
Choose Azure as the service mode.
Select a pricing tier. For more information on costs and pricing tiers, see the
pricing page .
Select Next: Networking.

4. On the Create Migration Service networking screen:

Select an existing virtual network or create a new one. The virtual network
provides Azure Database Migration Service with access to the source server
and the target instance. For more information about how to create a virtual
network in the Azure portal, see the article Create a virtual network using the
Azure portal.
Select Review + Create to review the details and then select Create to create
the service.

After a few moments, your instance of the Azure Database Migration service
is created and ready to use:

Create a migration project


After the service is created, locate it within the Azure portal, open it, and then create a
new migration project.
1. In the Azure portal menu, select All services. Search for and select Azure Database
Migration Services.

2. On the Azure Database Migration Services screen, select the Azure Database
Migration Service instance that you created.

3. Select New Migration Project.

4. On the New migration project screen, specify a name for the project, in the
Source server type text box, select SQL Server, in the Target server type text box,
select Azure SQL Database, and then for Choose Migration activity type, select
Data migration.
5. Select Create and run activity to create the project and run the migration activity.

Specify source details


1. On the Select source screen, specify the connection details for the source SQL
Server instance.

Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server
instance name. You can also use the IP Address for situations in which DNS name
resolution isn't possible.

2. If you have not installed a trusted certificate on your source server, select the Trust
server certificate check box.

When a trusted certificate is not installed, SQL Server generates a self-signed


certificate when the instance is started. This certificate is used to encrypt the
credentials for client connections.
U Caution

TLS connections that are encrypted using a self-signed certificate do not


provide strong security. They are susceptible to man-in-the-middle attacks.
You should not rely on TLS using self-signed certificates in a production
environment or on servers that are connected to the internet.

) Important

If you use SSIS, DMS does not currently support the migration of source
SSISDB, but you can redeploy your SSIS projects/packages to the destination
SSISDB hosted by Azure SQL Database. For more information about migrating
SSIS packages, see the article Migrate SQL Server Integration Services
packages to Azure.

3. Select Next: Select databases.

Select databases for migration


Select either all databases or specific databases that you want to migrate to Azure SQL
Database. DMS provides you with the expected migration time for selected databases. If
the migration downtimes are acceptable continue with the migration. If the migration
downtimes are not acceptable, consider migrating to SQL Managed Instance with near-
zero downtime or submit ideas/suggestions for improvement, and other feedback in the
Azure Community forum — Azure Database Migration Service .

1. Choose the database(s) you want to migrate from the list of available databases.

2. Review the expected downtime. If it's acceptable, select Next: Select target >>

Specify target details


1. On the Select target screen, provide authentication settings to your Azure SQL
Database.
7 Note

Currently, SQL authentication is the only supported authentication type.

2. Select Next: Map to target databases screen, map the source and the target
database for migration.

If the target database contains the same database name as the source database,
Azure Database Migration Service selects the target database by default.
3. Select Next: Configuration migration settings, expand the table listing, and then
review the list of affected fields.

Azure Database Migration Service auto selects all the empty source tables that
exist on the target Azure SQL Database instance. If you want to remigrate tables
that already include data, you need to explicitly select the tables on this blade.

4. Select Next: Summary, review the migration configuration and in the Activity
name text box, specify a name for the migration activity.
Run the migration
Select Start migration.

The migration activity window appears, and the Status of the activity is Pending.

Monitor the migration


1. On the migration activity screen, select Refresh to update the display until the
Status of the migration shows as Completed.

2. Verify the target database(s) on the target Azure SQL Database.

Additional resources
For information about Azure Database Migration Service, see the article What is
Azure Database Migration Service?.
For information about Azure SQL Database, see the article What is the Azure SQL
Database service?.
SQL Database Projects extension
Article • 04/13/2023

The SQL Database Projects extension is an Azure Data Studio and Visual Studio Code
extension for developing SQL databases in a project-based development environment.
Compatible databases include SQL Server, Azure SQL Database, Azure SQL Managed
Instance, and Azure Synapse SQL. A SQL project is a local representation of SQL objects
that comprise the schema for a single database, such as tables, stored procedures, or
functions. When a SQL Database project is built, the output artifact is a .dacpac file. New
and existing databases can be updated to match the contents of the .dacpac by
publishing the SQL Database project with the SQL Database Projects extension or by
publishing the .dacpac with the command line interface SqlPackage.

Extension features
The SQL Database Projects extension provides the following features:

Create a new blank project.


Create a new project from a connected database.
Open a project previously created in Azure Data Studio, Visual Studio Code or in
SQL Server Data Tools.
Edit a project by adding or removing objects (tables, views, stored procedures) or
custom scripts in the project.
Organize files/scripts in folders.
Add references to system databases or a user dacpac.
Build a single project.
Deploy a single project.
Load connection details (SQL Windows authentication) and SQLCMD variables
from deployment profile.

The following features in the SQL Database Projects extension are currently in preview:

Create new projects from an OpenAPI specification file.


SDK-style SQL projects (Microsoft.Build.Sql ).

Watch this short 10-minute video for an introduction to the SQL Database Projects
extension in Azure Data Studio:
https://channel9.msdn.com/Shows/Data-Exposed/Build-SQL-Database-Projects-Easily-
in-Azure-Data-Studio/player?WT.mc_id=dataexposed-c9-
niner&nocookie=true&locale=en-us&embedUrl=%2Fsql%2Fazure-data-
studio%2Fextensions%2Fsql-database-project-extension

Install
You can install the SQL Database Project extension in Azure Data Studio and Visual
Studio Code.

Azure Data Studio


To install the SQL Database Project extension in Azure Data Studio, follow these steps:

1. Open the extensions manager to access the available extensions. To do so, either
select the extensions icon or select Extensions in the View menu.

2. Identify the SQL Database Projects extension by typing all or part of the name in
the extension search box. Select an available extension to view its details.
3. Select the extension you want and choose to Install it.

4. Select Reload to enable the extension (only required the first time you install an
extension).

5. Select the Projects icon from the activity bar.

7 Note

It is recommended to install the Schema Compare extension alongside the


SQL Database Projects extension for full functionality.

Visual Studio Code


The SQL Database Projects extension is installed with the mssql extension for Visual
Studio Code.

Dependencies
The SQL Database Projects extension has a dependency on the .NET SDK (required) and
AutoRest.Sql (optional).

.NET SDK
The .NET SDK is required for project build functionality and you are prompted to install
the .NET SDK if a supported version can't be detected by the extension. The .NET SDK
can be downloaded and installed for Windows, macOS, and Linux .

If you would like to check currently installed versions of the dotnet SDK, open a terminal
and run the following command:

.NET CLI

dotnet --list-sdks

After installing the .NET SDK, your environment is ready to use the SQL Database
Projects extension.

Common issues

Nuget.org missing from the list of sources may result in error messages such as:

error MSB4236: The SDK 'Microsoft.Build.Sql/0.1.9-preview' specified could not

be found.
Unable to find package Microsoft.Build.Sql. No packages exist with this id in

source(s): Microsoft Visual Studio Offline Packages

To check if nuget.org is registered as a source, run dotnet nuget list source from the
command line and review the results for an [Enabled] item referencing nuget.org. If
nuget.org is not registered as a source, run dotnet nuget add source
https://api.nuget.org/v3/index.json -n nuget.org .

Unsupported .NET SDK versions may result in error messages such as:

error MSB4018: The "SqlBuildTask" task failed unexpectedly.


error MSB4018: System.TypeInitializationException: The type initializer for

'SqlSchemaModelStaticState' threw an exception. --->


System.IO.FileNotFoundException: Could not load file or assembly

'System.Runtime, Version=4.2.2.0, Culture=neutral,

PublicKeyToken=b03f5f7f11d50a3a'. The system cannot find the file specified.


[c:\Users\ .sqlproj]_ (where the linked non-existing file has an unmatched

closing square bracket).

To force the SQL Database Projects extension to use the v6.x version of the .NET SDK
when multiple versions are installed, add a global.json file to the folder that contains the
SQL project.
AutoRest.Sql
The SQL extension for AutoRest is automatically downloaded and used by the SQL
Database Projects extension when a SQL project is generated from an OpenAPI
specification file.

Limitations
Currently, the SQL Database Project extension has the following limitations:

Tasks (build/publish) aren't user-defined.


SQLCLR objects in projects aren't supported.
Code analysis rules on projects aren't supported at this time.

Workspace
SQL database projects are contained within a logical workspace in Azure Data Studio
and Visual Studio Code. A workspace manages the folder(s) visible in the Explorer pane.
All SQL projects within the folders open in the current workspace are available in the
SQL Database Projects view by default.

You can manually add and remove projects from a workspace through the interface in
the Projects pane. The settings for a workspace can be manually edited in the .code-
workspace file, if necessary.

In the following example .code-workspace file, the folders array lists all folders included
in the Explorer pane and the dataworkspace.excludedProjects array within settings lists
all the SQL projects excluded from the Projects pane.

JSON

"folders": [

"path": "."
},

"name": "WideWorldImportersDW",

"path": "..\\WideWorldImportersDW"

],

"settings": {

"dataworkspace.excludedProjects": [

"AdventureWorksLT.sqlproj"

Next steps
Getting Started with the SQL Database Projects extension
Build and Publish a project with SQL Database Projects extension
SQL Server extension for Visual Studio
Code
Article • 04/03/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

Azure Synapse Analytics

This article shows how to use the mssql extension for Visual Studio Code (Visual Studio
Code) to work with databases in SQL Server on Windows, macOS, and Linux, as well as
Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. The
mssql extension for Visual Studio Code lets you connect to a SQL Server, query with
Transact-SQL (T-SQL), and view the results.

Create or open a SQL file


The mssql extension enables mssql commands and T-SQL IntelliSense in the code editor
when the language mode is set to SQL.

1. Select File > New File or press Ctrl+N. Visual Studio Code opens a new Plain Text
file by default.

2. Select Plain Text on the lower status bar, or press Ctrl+K > M, and select SQL from
the languages dropdown.

7 Note

If this is the first time you have used the extension, the extension installs the
SQL Tools Service in the background.

If you open an existing file that has a .sql file extension, the language mode is
automatically set to SQL.

Connect to SQL Server


Follow these steps to create a connection profile and connect to a SQL Server.

1. Press Ctrl+Shift+P or F1 to open the Command Palette.


2. Type sql to display the mssql commands, or type sqlcon, and then select MS SQL:
Connect from the dropdown.

7 Note

A SQL file, such as the empty SQL file you created, must have focus in the
code editor before you can execute the mssql commands.

3. Select the MS SQL: Manage Connection Profiles command.

4. Then select Create to create a new connection profile for your SQL Server.

5. Follow the prompts to specify the properties for the new connection profile. After
specifying each value, press Enter to continue.

Connection Description
property

Server name Specify the SQL Server instance name. Use localhost to connect to a SQL
or ADO Server instance on your local machine. To connect to a remote SQL Server,
connection enter the name of the target SQL Server, or its IP address. To connect to a
string SQL Server container, specify the IP address of the container's host
machine. If you need to specify a port, use a comma to separate it from
the name. For example, for a server listening on port 1401, enter
<servername or IP>,1401 .

By default, the connection string uses port 1433. A default instance of


SQL Server uses 1433 unless modified. If your instance is listening on
1433, you do not need to specify the port.

As an alternative, you can enter the ADO connection string for your
database here.
Connection Description
property

Database The database that you want to use. To connect to the default database,
name don't specify a database name here.
(optional)

Authentication Choose either Integrated or SQL Login.


Type

User name If you selected SQL Login, enter the name of a user with access to a
database on the server.

Password Enter the password for the specified user.

Save Password Press Enter to select Yes and save the password. Select No to be
prompted for the password each time the connection profile is used.

Profile Name Type a name for the connection profile, such as localhost profile.
(optional)

After you enter all values and select Enter, Visual Studio Code creates the
connection profile and connects to the SQL Server.

 Tip

If the connection fails, try to diagnose the problem from the error message in
the Output panel in Visual Studio Code. To open the Output panel, select
View > Output. Also review the connection troubleshooting
recommendations.

6. Verify your connection in the lower status bar.

As an alternative to the previous steps, you can also create and edit connection profiles
in the User Settings file (settings.json). To open the settings file, select File > Preferences
> Settings. For more information, see Manage connection profiles .

Encrypt and Trust server certificate


The mssql extension for VS Code v1.17.0 and later includes an important change to the
Encrypt property, which is now enabled (set to True) by default for MSSQL provider
connections, and SQL Server must be configured with TLS certificates signed by a
trusted root certificate authority. In addition, if an initial connection attempt fails with
encryption enabled (default), the mssql extension will provide a notification prompt with
an option to attempt the connection with Trust Server Certificate enabled. Both the
Encrypt and Trust server certificate properties are also available for manual editing in the
user settings file (settings.json). The best practice is to support a trusted encrypted
connection to the server.

For users connecting to Azure SQL Database, no changes to existing, saved connections
are needed; Azure SQL Database supports encrypted connections and is configured with
trusted certificates.

For users connecting to on-premises SQL Server, or SQL Server in a virtual machine, if
Encrypt is set to True, ensure that you have a certificate from a trusted certificate
authority (that is, not a self-signed certificate). Alternatively, you can choose to connect
without encryption (Encrypt set to False), or to trust the server certificate (Encrypt set to
True and Trust server certificate set to True).

Create a database
1. In the new SQL file that you started earlier, type sql to display a list of editable
code snippets.
2. Select sqlCreateDatabase.

3. In the snippet, type TutorialDB to replace 'DatabaseName':

SQL

-- Create a new database called 'TutorialDB'
-- Connect to the 'master' database to run this snippet
USE master
GO
IF NOT EXISTS (
    SELECT name
    FROM sys.databases
    WHERE name = N'TutorialDB'
)
CREATE DATABASE [TutorialDB]
GO

4. Press Ctrl+Shift+E to execute the Transact-SQL commands. View the results in the
query window.

 Tip
You can customize the shortcut keys for the mssql commands. See Customize
shortcuts .

Create a table
1. Delete the contents of the code editor window.

2. Press Ctrl+Shift+P or F1 to open the Command Palette.

3. Type sql to display the mssql commands, or type sqluse, and then select the MS
SQL: Use Database command.

4. Select the new TutorialDB database.

5. In the code editor, type sql to display the snippets, select sqlCreateTable, and then
press Enter.

6. In the snippet, type Employees for the table name.

7. Press Tab to get to the next field, and then type dbo for the schema name.

8. Replace the column definitions with the following columns (the completed statement is shown after these steps):

SQL

EmployeesId INT NOT NULL PRIMARY KEY,
Name [NVARCHAR](50) NOT NULL,
Location [NVARCHAR](50) NOT NULL

9. Press Ctrl+Shift+E to create the table.
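For reference, the completed statement produced by the snippet might look like the following sketch. The exact scaffolding the snippet generates can differ by extension version, so treat this as illustrative:

SQL

-- Create the 'Employees' table in the 'dbo' schema
CREATE TABLE dbo.Employees
(
    EmployeesId INT NOT NULL PRIMARY KEY, -- primary key column
    Name [NVARCHAR](50) NOT NULL,
    Location [NVARCHAR](50) NOT NULL
);
GO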

Insert and query


1. Add the following statements to insert four rows into the Employees table.

SQL
-- Insert rows into table 'Employees'
INSERT INTO Employees
    ([EmployeesId], [Name], [Location])
VALUES
    (1, N'Jared', N'Australia'),
    (2, N'Nikita', N'India'),
    (3, N'Tom', N'Germany'),
    (4, N'Jake', N'United States')
GO

-- Query the total count of employees
SELECT COUNT(*) AS EmployeeCount FROM dbo.Employees;

-- Query all employee information
SELECT e.EmployeesId, e.Name, e.Location
FROM dbo.Employees AS e
GO

While you type, T-SQL IntelliSense helps you to complete the statements:

 Tip

The mssql extension also has commands to help create INSERT and SELECT
statements. These were not used in the previous example.

2. Press Ctrl+Shift+E to execute the commands. The two result sets display in the
Results window.

View and save the result


1. Select View > Editor Layout > Flip Layout to switch to a vertical or horizontal split
layout.

2. Select the Results and Messages panel headers to collapse and expand the panels.

 Tip

You can customize the default behavior of the mssql extension. See
Customize extension options .

3. Select the maximize grid icon on the second result grid to zoom in to those results.

7 Note

The maximize icon displays when your T-SQL script produces two or more
result grids.

4. Open the grid context menu by right-clicking on the grid.


5. Select Select All.

6. Open the grid context menu again and select Save as JSON to save the result to a
.json file.

7. Specify a file name for the JSON file.

8. Verify that the JSON file saves and opens in Visual Studio Code.

If you need to save and run SQL scripts later, for administration or a larger development
project, save the scripts with a .sql extension.
Next steps
If you're new to T-SQL, see Tutorial: Write Transact-SQL statements and the
Transact-SQL Reference (Database Engine).
Develop for SQL databases in Visual Studio Code with the SQL Database Projects
extension
For more information on using or contributing to the mssql extension, see the
mssql extension project wiki .
For more information on using Visual Studio Code, see the Visual Studio Code
documentation .
Always Encrypted with secure enclaves
documentation
Find documentation about Always Encrypted with secure enclaves

Overview

e OVERVIEW

What is Always Encrypted with secure enclaves?

Configure and use Always Encrypted with secure enclaves

Set up in Azure SQL Database

p CONCEPT

Plan for secure enclaves in Azure SQL Database

Enable Always Encrypted with secure enclaves for your Azure SQL Database

Configure attestation for Always Encrypted using Azure Attestation

Set up in a SQL Server on Azure VM

p CONCEPT

Plan for Always Encrypted with secure enclaves without attestation

Plan for Host Guardian Service attestation

Deploy the Host Guardian Service for SQL Server

Register computer with Host Guardian Service

Configure the secure enclave in SQL Server

Samples and tutorials


s SAMPLE

Contoso HR web app sample

g TUTORIAL

Getting started using Always Encrypted with secure enclaves tutorials

Create and use indexes on enclave-enabled columns using randomized encryption

Develop a .NET application using Always Encrypted with secure enclaves

Develop a .NET Framework application using Always Encrypted with secure enclaves

Manage keys

c HOW-TO GUIDE

Manage keys for Always Encrypted with secure enclaves

Provision enclave-enabled keys

Rotate enclave-enabled keys

Configure columns

c HOW-TO GUIDE

Configure column encryption in-place using Always Encrypted with secure enclaves

Configure column encryption in-place with the Always Encrypted wizard in SSMS

Configure column encryption in-place with DACPAC

Configure column encryption in-place with PowerShell

Configure column encryption in-place with Transact-SQL

Enable Always Encrypted with secure enclaves for existing encrypted columns

Videos

q VIDEO
Inside Azure Datacenter Architecture with Mark Russinovich

A webinar including a section on secure enclaves

Query columns

c HOW-TO GUIDE

Run Transact-SQL statements using secure enclaves

Troubleshoot common issues for Always Encrypted with secure enclaves

Create and use indexes on columns using Always Encrypted with secure enclaves

Develop applications

c HOW-TO GUIDE

Develop applications using Always Encrypted with secure enclaves


Configure and use Always Encrypted
with secure enclaves
Article • 04/06/2023

Applies to: SQL Server 2019 (15.x) and later - Windows only, Azure SQL Database

Always Encrypted with secure enclaves extends the existing Always Encrypted feature to
enable richer functionality on sensitive data while keeping the data confidential. This
article lists common tasks for configuring and using the feature.

For tutorials that show you how to quickly get started with Always Encrypted with secure
enclaves, see:

Getting started using Always Encrypted with secure enclaves

Set up the secure enclave and attestation


Before you can use Always Encrypted with secure enclaves, you need to configure your
environment to ensure the secure enclave is available for the database. You may also
need to set up enclave attestation, if applicable.

The process for setting up your environment depends on whether you're using SQL
Server 2019 (15.x) or Azure SQL Database.

Set up the secure enclave and attestation in SQL Server


To set up Always Encrypted with secure enclaves without attestation, see:

Plan for Always Encrypted with secure enclaves in SQL Server without attestation
Configure the secure enclave in SQL Server

To set up Always Encrypted with secure enclaves and attestation, see:

Plan for Host Guardian Service attestation


Deploy the Host Guardian Service for SQL Server
Register computer with the Host Guardian Service
Configure the secure enclave in SQL Server
Set up the secure enclave and attestation in Azure SQL
Database
For details, see the following articles:

Plan for secure enclaves in Azure SQL Database


Enable Always Encrypted with secure enclaves for your Azure SQL Database
Configure Azure Attestation for your Azure SQL Database logical server

) Important

VBS enclaves in Azure SQL Database (in preview) currently do not support
attestation. Configuring Azure Attestation only applies to Intel SGX enclaves.

Manage keys for Always Encrypted with secure


enclaves
Manage keys for Always Encrypted with secure enclaves - overview
Provision enclave-enabled keys
Rotate enclave-enabled keys

Configure columns with Always Encrypted with


secure enclaves
Configure column encryption in-place using Always Encrypted with secure enclaves
- overview
Configure column encryption in-place with Transact-SQL
Configure column encryption in-place with PowerShell
Configure column encryption in-place with DAC Package
Enable Always Encrypted with secure enclaves for existing encrypted columns

Run Transact-SQL statements using secure


enclaves
Run Transact-SQL statements using secure enclaves
Troubleshoot common issues for Always Encrypted with secure enclaves
Create and use indexes on enclave-enabled
columns
Create and use indexes on columns using Always Encrypted with secure enclaves

Develop applications using Always Encrypted


with secure enclaves
Develop applications using Always Encrypted with secure enclaves

See also
Getting started using Always Encrypted with secure enclaves
Ledger overview
Article • 05/23/2023

Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance

7 Note

Ledger in Azure SQL Managed Instance is currently in public preview.

Establishing trust around the integrity of data stored in database systems has been a
longstanding problem for all organizations that manage financial, medical, or other
sensitive data. The ledger feature provides tamper-evidence capabilities in your
database. You can cryptographically attest to other parties, such as auditors or other
business parties, that your data hasn't been tampered with.

Ledger helps protect data from any attacker or high-privileged user, including database
administrators (DBAs), system administrators, and cloud administrators. As with a
traditional ledger, the feature preserves historical data. If a row is updated in the
database, its previous value is maintained and protected in a history table. Ledger
provides a chronicle of all changes made to the database over time.

Ledger and the historical data are managed transparently, offering protection without
any application changes. The feature maintains historical data in a relational form to
support SQL queries for auditing, forensics, and other purposes. It provides guarantees
of cryptographic data integrity while maintaining the power, flexibility, and performance
of the SQL database.
Use cases for ledger
Let's go over some advantages of using ledger.

Streamlining audits
Any production system's value is based on the ability to trust the data that the system is
consuming and producing. If a malicious user has tampered with the data in your
database, that can have disastrous results in the business processes relying on that data.

Maintaining trust in your data requires a combination of enabling the proper security
controls to reduce potential attacks, backup and restore practices, and thorough
disaster recovery procedures. Audits by external parties ensure that these practices are
put in place.
Audit processes are highly time-intensive activities. Auditing requires on-site inspection
of implemented practices such as reviewing audit logs, inspecting authentication, and
inspecting access controls. Although these manual processes can expose potential gaps
in security, they can't provide attestable proof that the data hasn't been maliciously
altered.

Ledger provides the cryptographic proof of data integrity to auditors. This proof can
help streamline the auditing process. It also provides nonrepudiation regarding the
integrity of the system's data.

Multiple-party business processes


In some systems, such as supply-chain management systems, multiple organizations
must share state from a business process with one another. These systems struggle with
the challenge of how to share and trust data. Many organizations are turning to
traditional blockchains, such as Ethereum or Hyperledger Fabric, to digitally transform
their multiple-party business processes.

Blockchain is a great solution for multiple-party networks where trust is low between
parties that participate on the network. Many of these networks are fundamentally
centralized solutions where trust is important, but a fully decentralized infrastructure is a
heavyweight solution.

Ledger provides a solution for these networks. Participants can verify the integrity of the
centrally housed data, without the complexity and performance implications that
network consensus introduces in a blockchain network.

Customer success
Learn how Lenovo is reinforcing customer trust using ledger in Azure SQL
Database by watching this video .
RTGS.global using ledger in Azure SQL Database to establish trust with banks
around the world .
Qode Health Solutions secures COVID-19 vaccination records with the ledger
feature in Azure SQL Database

Trusted off-chain storage for blockchain


When a blockchain network is necessary for a multiple-party business process, the
ability to query the data on the blockchain without sacrificing performance is a
challenge.
Typical patterns for solving this problem involve replicating data from the blockchain to
an off-chain store, such as a database. But after the data is replicated from the
blockchain to the database, the data integrity guarantees that a blockchain offers are
lost. Ledger provides data integrity for off-chain storage of blockchain networks, which
helps ensure complete data trust through the entire system.

How it works
Every row modified by a transaction in a ledger table is cryptographically hashed (SHA-256)
by using a Merkle tree data structure, which creates a root hash representing all rows
in the transaction. The transactions that the database processes are then also hashed
together through a Merkle tree data structure, and the result is a root hash that forms a
block. The block is then hashed by taking its own root hash, along with the root hash of
the previous block, as input to the hash function. That hashing forms a blockchain.

The root hashes in the database ledger, also called database digests, contain the
cryptographically hashed transactions and represent the state of the database. They can
be periodically generated and stored outside the database in tamper-proof storage,
such as Azure Blob Storage configured with immutability policies, Azure Confidential
Ledger, or on-premises Write Once Read Many (WORM) storage devices. Database
digests are later used to verify the integrity of the database by comparing the value of
the hash in the digest against the calculated hashes in the database.

Ledger functionality is introduced to tables in two forms:

Updatable ledger tables, which allow you to update and delete rows in your tables.
Append-only ledger tables, which only allow insertions to your tables.

Both updatable ledger tables and append-only ledger tables provide tamper-evidence
and digital forensics capabilities.
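As a minimal sketch of the two forms (table, column, and constraint names here are illustrative, not from the original examples):

SQL

-- Updatable ledger table with an anonymous history table
CREATE TABLE dbo.Balance
(
    CustomerId INT NOT NULL PRIMARY KEY,
    Amount DECIMAL(10,2) NOT NULL
)
WITH (SYSTEM_VERSIONING = ON, LEDGER = ON);

-- Append-only ledger table (inserts only)
CREATE TABLE dbo.AuditLog
(
    EventId BIGINT NOT NULL,
    Detail NVARCHAR(200) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));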

Updatable ledger tables


Updatable ledger tables are ideal for application patterns that expect to issue updates
and deletions to tables in your database, such as system of record (SOR) applications.
Existing data patterns for your application don't need to change to enable ledger
functionality.

Updatable ledger tables track the history of changes to any rows in your database when
transactions that perform updates or deletions occur. An updatable ledger table is a
system-versioned table that contains a reference to another table with a mirrored
schema.

The other table is called the history table. The system uses this table to automatically
store the previous version of the row each time a row in the ledger table is updated or
deleted. The history table is automatically created when you create an updatable ledger
table.

The values in the updatable ledger table and its corresponding history table provide a
chronicle of the values of your database over time. A system-generated ledger view
joins the updatable ledger table and the history table so that you can easily query this
chronicle of your database.

For more information on updatable ledger tables, see Create and use updatable ledger
tables.

Append-only ledger tables


Append-only ledger tables are ideal for application patterns that are insert-only, such as
security information and event management (SIEM) applications. Append-only ledger
tables block updates and deletions at the API level. This blocking provides more
tampering protection from privileged users such as system administrators and DBAs.

Because only insertions are allowed into the system, append-only ledger tables don't
have a corresponding history table because there's no history to capture. As with
updatable ledger tables, a ledger view provides insights into the transaction that
inserted rows into the append-only table, and the user that performed the insertion.

For more information on append-only ledger tables, see Create and use append-only
ledger tables.

Ledger database
Ledger databases provide an easy solution for applications that require the integrity of
all data to be protected for the entire lifetime of the database. A ledger database can
only contain ledger tables; creating regular tables (that aren't ledger tables) isn't
supported. Each table is, by default, created as an updatable ledger table with default
settings, which makes creating such tables straightforward. You configure a database as
a ledger database at creation. Once created, a ledger database can't be converted to a
regular database. For more information, see Configure a ledger database.
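As a sketch, the ledger property can be set at creation with T-SQL (the database name is illustrative; in Azure SQL Database, the option can also be set during portal or CLI deployment):

SQL

-- Create a database in which every new table is a ledger table by default
CREATE DATABASE ledgerdb
WITH LEDGER = ON;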

Database digests
The hash of the latest block in the database ledger is called the database digest. It
represents the state of all ledger tables in the database at the time that the block was
generated.

When a block is formed, its associated database digest is published and stored outside
the database in tamper-proof storage. Because database digests represent the state of
the database at the time that they were generated, protecting the digests from
tampering is paramount. An attacker who has access to modify the digests would be
able to:

1. Tamper with the data in the database.


2. Generate the hashes that represent the database with those changes.
3. Modify the digests to represent the updated hash of the transactions in the block.

Ledger provides the ability to automatically generate and store the database digests in
immutable storage or Azure Confidential Ledger, to prevent tampering. Alternatively,
users can manually generate database digests and store them in the location of their
choice. Database digests are used for later verifying that the data stored in ledger tables
hasn't been tampered with.

Ledger verification
The ledger feature doesn't allow modifying the content of ledger system views, append-
only tables and history tables. However, an attacker or system administrator who has
control of the machine can bypass all system checks and directly tamper with the data.
For example, an attacker or system administrator can edit the database files in storage.
Ledger can't prevent such attacks but guarantees that any tampering will be detected
when the ledger data is verified.

The ledger verification process takes as input one or more previously generated
database digests and recomputes the hashes stored in the database ledger based on
the current state of the ledger tables. If the computed hashes don't match the input
digests, the verification fails, indicating that the data has been tampered with. Ledger
then reports all inconsistencies that it has detected.

Next steps
What is the database ledger
Create and use append-only ledger tables
Create and use updatable ledger tables
Enable automatic digest storage
Configure a ledger database
Verify a ledger table to detect tampering

See also
Bringing the power of blockchain to Azure SQL Database and SQL Server with
ledger | Data Exposed
What is the database ledger?
Article • 05/23/2023

Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance

The database ledger is part of the ledger feature. The database ledger incrementally
captures the state of a database as the database evolves over time, while updates occur
on ledger tables. It logically uses a blockchain and Merkle tree data structures.

Any operations that update a ledger table need to perform some additional tasks to
maintain the historical data and compute the digests captured in the database ledger.
Specifically, for every row updated, we must:

Persist the earlier version of the row in the history table.
Assign the transaction ID and generate a new sequence number, persisting them in the appropriate system columns.
Serialize the row content and include it when computing the hash for all rows updated by this transaction.

Ledger achieves that by extending the Data Manipulation Language (DML) query plans
of all insert, update and delete operations targeting ledger tables. The transaction ID
and newly generated sequence number are set for the new version of the row. Then, the
query plan operator executes a special expression that serializes the row content and
computes its hash, appending it to a Merkle Tree that is stored at the transaction level
and contains the hashes of all row versions updated by this transaction for this ledger
table. The root of the tree represents all the updates and deletes performed by this
transaction in this ledger table. If the transaction updates multiple tables, a separate
Merkle Tree is maintained for each table. The figure below shows an example of a
Merkle Tree storing the updated row versions of a ledger table and the format used to
serialize the rows. Other than the serialized value of each column, we include metadata
regarding the number of columns in the row, the ordinal of individual columns, the data
types, lengths and other information that affects how the values are interpreted.
To capture the state of the database, the database ledger stores an entry for every
transaction. It captures metadata about the transaction, such as its commit timestamp
and the identity of the user who executed it. It also captures the Merkle tree root of the
rows updated in each ledger table (see above). These entries are then appended to a
tamper-evident data structure to allow verification of integrity in the future. A block is
closed:

Approximately every 30 seconds, when your database is configured for automatic database digest storage
When the user manually generates a database digest by running the sys.sp_generate_database_ledger_digest stored procedure
When it contains 100,000 transactions

When a block is closed, new transactions are inserted into a new block. The block
generation process then:

1. Retrieves all transactions that belong to the closed block from both the in-memory
queue and the sys.database_ledger_transactions system catalog view.
2. Computes the Merkle tree root over these transactions and the hash of the
previous block.
3. Persists the closed block in the sys.database_ledger_blocks system catalog view.

Because this is a regular table update, the system automatically guarantees its durability.
To maintain the single chain of blocks, this operation is single-threaded. But it's also
efficient, because it only computes the hashes over the transaction information and
happens asynchronously. It doesn't affect the transaction performance.
For more information on how ledger provides data integrity, see the articles, Digest
management and Database verification.

Where are database transactions and block


data stored?
The data for transactions and blocks is physically stored as rows in two system catalog
views:

sys.database_ledger_transactions: Maintains a row with the information of each transaction in the database ledger. The information includes the ID of the block where this transaction belongs and the ordinal of the transaction within the block.
sys.database_ledger_blocks: Maintains a row for every block in the ledger, including the root of the Merkle tree over the transactions within the block and the hash of the previous block to form a blockchain.
To view the database ledger, run the following T-SQL statements in SQL Server
Management Studio, Azure Data Studio or SQL Server Developer Tools.

SQL

SELECT * FROM sys.database_ledger_transactions;
GO

SELECT * FROM sys.database_ledger_blocks;
GO

(Figure: an example ledger table consisting of four transactions that made up one block in the blockchain of the database ledger.)

Permissions
Viewing the database ledger requires the VIEW LEDGER CONTENT permission. For details
on permissions related to ledger tables, see Permissions.

See also
Ledger overview
Data Manipulation Language (DML)
Ledger views
Updatable ledger tables
Article • 05/23/2023

Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance

Updatable ledger tables are system-versioned tables on which users can perform
updates and deletes while also providing tamper-evidence capabilities. When updates
or deletes occur, all earlier versions of a row are preserved in a secondary table, known
as the history table. The history table mirrors the schema of the updatable ledger table.
When a row is updated, the latest version of the row remains in the ledger table, while
its earlier version is inserted into the history table by the system, transparently to the
application.

Both updatable ledger tables and temporal tables are system-versioned tables, for
which the database engine captures historical row versions in secondary history tables.
Either technology provides unique benefits. Updatable ledger tables make both the
current and historical data tamper evident. Temporal tables support querying the data
stored at any point in time instead of only the data that's correct at the current moment
in time. You can use both technologies together by creating tables that are both
updatable ledger tables and temporal tables.
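A minimal sketch of such a combined table, assuming illustrative names for the table, period columns, and history table:

SQL

-- A table that is both a temporal table and an updatable ledger table
CREATE TABLE dbo.Orders
(
    OrderId INT NOT NULL PRIMARY KEY,
    Status NVARCHAR(20) NOT NULL,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrdersHistory),
    LEDGER = ON
);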
You can create an updatable ledger table by specifying the LEDGER = ON argument in
your CREATE TABLE (Transact-SQL) statement.

 Tip

LEDGER = ON is optional when creating updatable ledger tables in a ledger database. By default, each table in a ledger database is an updatable ledger table.

For information on options available when you specify the LEDGER argument in your T-
SQL statement, see CREATE TABLE (Transact-SQL).
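For example, a basic updatable ledger table with an explicitly named history table might look like this sketch (the table, column, and history table names are illustrative):

SQL

CREATE TABLE dbo.Accounts
(
    AccountId INT NOT NULL PRIMARY KEY,
    Balance DECIMAL(19,2) NOT NULL
)
WITH (
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountsHistory),
    LEDGER = ON
);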

) Important

After a ledger table is created, it can't be reverted to a table that isn't a ledger
table. As a result, an attacker can't temporarily remove ledger capabilities on a
ledger table, make changes, and then reenable ledger functionality.
Updatable ledger table schema
An updatable ledger table needs to have the following GENERATED ALWAYS columns
that contain metadata noting which transactions made changes to the table and the
order of operations by which rows were updated by the transaction. This data is useful
for forensics purposes in understanding how data was inserted over time.

If you don't specify the required GENERATED ALWAYS columns of the ledger table and
ledger history table in the CREATE TABLE (Transact-SQL) statement, the system
automatically adds the columns and uses the following default names. For more
information, see examples in Creating an updatable ledger table.

Default column name | Data type | Description
--- | --- | ---
ledger_start_transaction_id | bigint | The ID of the transaction that created a row version
ledger_end_transaction_id | bigint | The ID of the transaction that deleted a row version
ledger_start_sequence_number | bigint | The sequence number of an operation within a transaction that created a row version
ledger_end_sequence_number | bigint | The sequence number of an operation within a transaction that deleted a row version

History table
The history table is automatically created when an updatable ledger table is created. The
history table captures the historical values of rows changed because of updates and
deletes in the updatable ledger table. The schema of the history table mirrors that of the
updatable ledger table it's associated with.

When you create an updatable ledger table, you can either specify the name of the
schema to contain your history table and the name of the history table, or you can have
the system generate the name of the history table and add it to the same schema as the
ledger table. History tables with system-generated names are called anonymous history
tables. The naming convention for an anonymous history table is
<schema>.<updatableledgertablename>.MSSQL_LedgerHistoryFor_<GUID>.

Ledger view
For every updatable ledger table, the system automatically generates a view, called the
ledger view. The ledger view is a join of the updatable ledger table and its associated
history table. The ledger view reports all row modifications that have occurred on the
updatable ledger table by joining the historical data in the history table. This view
enables users, their partners, or auditors to analyze all historical operations and detect
potential tampering. Each row operation is accompanied by the ID of the acting
transaction, along with whether the operation was a DELETE or an INSERT . Users can
retrieve more information about the time the transaction was executed and the identity
of the user who executed it and correlate it to other operations performed by this
transaction.

For example, if you want to track transaction history for a banking scenario, the ledger
view provides a chronicle of transactions over time. By using the ledger view, you don't
have to independently view the updatable ledger table and history tables or construct
your own view to do so.

For an example of using the ledger view, see Create and use updatable ledger tables.
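As a quick sketch, assuming an updatable ledger table dbo.Accounts whose ledger view has the default name dbo.Accounts_Ledger (both names are illustrative), you could correlate row operations with transaction metadata like this:

SQL

-- Show every INSERT/DELETE recorded for the table, with who ran it and when
SELECT t.commit_time,
       t.principal_name,
       v.AccountId,
       v.Balance,
       v.ledger_operation_type_desc
FROM dbo.Accounts_Ledger AS v
JOIN sys.database_ledger_transactions AS t
    ON t.transaction_id = v.ledger_transaction_id
ORDER BY t.commit_time DESC;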

The ledger view's schema mirrors the columns defined in the updatable ledger table and
its history table, but the GENERATED ALWAYS columns are different from those of the
updatable ledger and history tables.

Ledger view schema

7 Note

The ledger view column names can be customized when you create the table by
using the <ledger_view_option> parameter with the CREATE TABLE (Transact-SQL)
statement. For more information, see ledger view options and the corresponding
examples in CREATE TABLE (Transact-SQL).

Default column name | Data type | Description
--- | --- | ---
ledger_transaction_id | bigint | The ID of the transaction that created or deleted a row version.
ledger_sequence_number | bigint | The sequence number of a row-level operation within the transaction on the table.
ledger_operation_type | tinyint | Contains 1 (INSERT) or 2 (DELETE). Inserting a row into the ledger table produces a new row in the ledger view that contains 1 in this column. Deleting a row from the ledger table produces a new row in the ledger view that contains 2 in this column. Updating a row in the ledger table produces two new rows in the ledger view. One row contains 2 (DELETE), and the other row contains 1 (INSERT) in this column.
ledger_operation_type_desc | nvarchar(128) | Contains INSERT or DELETE. For more information, see the preceding row.

Next steps
Create and use updatable ledger tables
Create and use append-only ledger tables
How to migrate data from regular tables to ledger tables
Append-only ledger tables
Article • 02/28/2023

Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance

Append-only ledger tables allow only INSERT operations on your tables, which ensures
that privileged users such as database administrators can't alter data through traditional
Data Manipulation Language operations. Append-only ledger tables are ideal for
systems that don't update or delete records, such as security information and event
management (SIEM) systems, or blockchain systems where data needs to be replicated
from the blockchain to a database. Because there are no UPDATE or DELETE operations
on an append-only table, there's no need for a corresponding history table as there is
with updatable ledger tables.
You can create an append-only ledger table by specifying the LEDGER = ON argument in
your CREATE TABLE (Transact-SQL) statement and specifying the APPEND_ONLY = ON
option.
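For example (the table and column names are illustrative):

SQL

CREATE TABLE dbo.SecurityEvents
(
    EventId BIGINT NOT NULL,
    EventTime DATETIME2 NOT NULL,
    Detail NVARCHAR(400) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));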

) Important

After a table is created as a ledger table, it can't be reverted to a table that doesn't
have ledger functionality. As a result, an attacker can't temporarily remove ledger
capabilities, make changes to the table, and then reenable ledger functionality.

Append-only ledger table schema


An append-only ledger table needs to have the following GENERATED ALWAYS columns,
which contain metadata noting which transactions inserted rows into the table and the
order of the insert operations within each transaction. When you create an append-only
ledger table, these GENERATED ALWAYS columns are created in your ledger table. This
data is useful for forensics purposes in understanding how data was inserted over time.

If you don't specify the definitions of the GENERATED ALWAYS columns in the CREATE
TABLE statement, the system automatically adds them by using the following default
names.

Default column name | Data type | Description
--- | --- | ---
ledger_start_transaction_id | bigint | The ID of the transaction that created a row version
ledger_start_sequence_number | bigint | The sequence number of an operation within a transaction that created a row version

Ledger view
For every append-only ledger table, the system automatically generates a view, called
the ledger view. The ledger view reports all row inserts that have occurred on the table.
The ledger view is primarily helpful for updatable ledger tables, rather than append-only
ledger tables, because append-only ledger tables don't have any UPDATE or DELETE
capabilities. The ledger view for append-only ledger tables is available for consistency
between both updatable and append-only ledger tables.
Ledger view schema

7 Note

The ledger view column names can be customized when you create the table by
using the <ledger_view_option> parameter with the CREATE TABLE (Transact-SQL)
statement. For more information, see ledger view options and the corresponding
examples in CREATE TABLE (Transact-SQL).

Default column name | Data type | Description
--- | --- | ---
ledger_transaction_id | bigint | The ID of the transaction that created or deleted a row version.
ledger_sequence_number | bigint | The sequence number of a row-level operation within the transaction on the table.
ledger_operation_type | tinyint | Contains 1 (INSERT) or 2 (DELETE). Inserting a row into the ledger table produces a new row in the ledger view that contains 1 in this column. Deleting a row from the ledger table produces a new row in the ledger view that contains 2 in this column. Updating a row in the ledger table produces two new rows in the ledger view. One row contains 2 (DELETE), and the other row contains 1 (INSERT) in this column. A DELETE shouldn't occur on an append-only ledger table.
ledger_operation_type_desc | nvarchar(128) | Contains INSERT or DELETE. For more information, see the preceding row.

Next steps
Create and use append-only ledger tables
Create and use updatable ledger tables
How to migrate data from regular tables to ledger tables
Digest management
Article • 05/23/2023

Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance

Database digests
The hash of the latest block in the database ledger is called the database digest. It
represents the state of all ledger tables in the database at the time when the block was
generated. Generating a database digest is efficient, because it involves computing only
the hashes of the blocks that were recently appended.

Database digests can be generated either automatically by the system or manually by


the user. You can use them later to verify the integrity of the database.

Database digests are generated in the form of a JSON document that contains the hash
of the latest block, together with metadata for the block ID. The metadata includes the
time that the digest was generated and the commit time stamp of the last transaction in
this block.

The verification process and the integrity of the database depend on the integrity of the
input digests. For this purpose, database digests that are extracted from the database
need to be stored in trusted storage that the high-privileged users or attackers of the
database can't tamper with.

Automatic generation and storage of database digests

7 Note

Automatic generation and storage of database digests in SQL Server only supports
Azure Storage accounts.

Ledger integrates with the immutable storage feature of Azure Blob Storage and Azure
Confidential Ledger. This integration provides secure storage services in Azure to help
protect the database digests from potential tampering. This integration provides a
simple and cost-effective way for users to automate digest management without having
to worry about their availability and geographic replication. Azure Confidential Ledger
has a stronger integrity guarantee for customers who might be concerned about
privileged administrators' access to the digests. This table compares the immutable
storage feature of Azure Blob Storage with Azure Confidential Ledger.

You can configure automatic generation and storage of database digests through the
Azure portal, PowerShell, or the Azure CLI. For more information, see Enable automatic
digest storage. When you configure automatic generation and storage, database digests
are generated on a predefined interval of 30 seconds and uploaded to the selected
storage service. If no transactions occur on the system in the 30-second interval, a
database digest won't be generated and uploaded. This mechanism ensures that
database digests are generated only when data has been updated in your database.
When the endpoint is Azure Blob Storage, the logical server for Azure SQL Database
or Azure SQL Managed Instance creates a new container named sqldbledgerdigests
and uses a naming pattern like ServerName/DatabaseName/CreationTime. The creation
time is needed because a database with the same name can be dropped and re-created
or restored, allowing for different incarnations of the database under the same name.
For more information, see Digest management considerations.

7 Note

For SQL Server, the container needs to be created manually by the user.

Azure Storage Account Immutability Policy

If you use an Azure Storage account for the storage of the database digests, configure
an immutability policy on your container after provisioning to ensure that database
digests are protected from tampering. Make sure the immutability policy allows
protected append writes to append blobs and that the policy is locked.

Azure Storage account permission


If you use Azure SQL Database or Azure SQL Managed Instance, make sure that your
logical server or managed instance (system identity) has sufficient RBAC permissions to
write digests by adding it to the Storage Blob Data Contributor role. If you use active
geo-replication or auto-failover groups, make sure that the secondary replicas have the
same RBAC permission on the Azure Storage account.

If you use SQL Server, you have to create a shared access signature (SAS) on the digest
container to allow SQL Server to connect and authenticate against the Azure Storage
account.

Create a container on the Azure Storage account, named sqldbledgerdigests.
Create a policy on the container with the Read, Add, Create, Write, and List permissions, and generate a shared access signature key.
For the sqldbledgerdigests container used for digest file storage, create a SQL Server credential whose name matches the container path.

The following example assumes that an Azure Storage container, a policy, and a SAS key
have been created. This is needed by SQL Server to access the digest files in the
container.

In the following code snippet, replace <your SAS key> with the SAS key. The SAS key
looks like 'sr=c&si=<MYPOLICYNAME>&sig=<THESHAREDACCESSSIGNATURE>' .

SQL

CREATE CREDENTIAL [https://ledgerstorage.blob.core.windows.net/sqldbledgerdigests]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your SAS key>';

Azure Confidential Ledger Permission

If you use Azure SQL Database or Azure SQL Managed Instance, make sure that your
logical server or managed instance (System Identity) has sufficient permissions to write
digests by adding it to the Contributor role. To do this, follow the steps for Azure
Confidential Ledger user management.

7 Note

Automatic generation and storage of database digests in SQL Server only supports
Azure Storage accounts.

Manual generation and storage of database digests


You can also generate a database digest on demand so that you can manually store the
digest in any service or device that you consider a trusted storage destination. For
example, you might choose an on-premises write once, read many (WORM) device as a
destination. You manually generate a database digest by running the
sys.sp_generate_database_ledger_digest stored procedure in either SQL Server
Management Studio or Azure Data Studio.

SQL
EXECUTE sp_generate_database_ledger_digest;

The returned result set is a single row of data. It should be saved to the trusted storage
location as a JSON document as follows:

JSON

{
"database_name": "ledgerdb",
"block_id": 0,
"hash":
"0xDC160697D823C51377F97020796486A59047EBDBF77C3E8F94EEE0FFF7B38A6A",
"last_transaction_commit_time": "2020-11-12T18:01:56.6200000",
"digest_time": "2020-11-12T18:39:27.7385724"
}

Permissions

Generating database digests requires the GENERATE LEDGER DIGEST permission. For
details on permissions related to ledger tables, see Permissions.
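For example, to allow a principal to generate digests (the user name here is hypothetical):

SQL

GRANT GENERATE LEDGER DIGEST TO [digest_operator];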

Digest management considerations

Database restore
Restoring the database to an earlier point in time, also known as point-in-time restore,
is an operation frequently used when a mistake occurs and users need to quickly revert
the state of the database to an earlier point in time. When the generated digests are
uploaded to Azure Storage or Azure Confidential Ledger, the creation time of the
database that these digests map to is also captured. Every time the database is restored,
it's tagged with a new creation time, and this technique allows the digests to be stored
across different "incarnations" of the database. For SQL Server, the creation time is the
current UTC time when the digest upload is enabled for the first time. Ledger preserves
the information about when a restore operation occurred, allowing the verification
process to use all the relevant digests across the various incarnations of the database.
Additionally, users can inspect all digests for different creation times to identify when
the database was restored and how far back it was restored to. Because this data is
written to immutable storage, this information is protected as well.

7 Note
Ledger in Azure SQL Managed Instance is currently in public preview. If you
perform a native restore of a database backup, you need to change the digest path
manually using the Azure Portal, PowerShell or the Azure CLI.

Active geo-replication and Always On availability groups


Active geo-replication or auto-failover groups can be configured for Azure SQL
Database or Azure SQL Managed Instance. Replication across geographic regions is
asynchronous for performance reasons and thus allows the secondary database to be
slightly behind the primary. In the event of a geographic failover, any recent data that
hasn't yet been replicated is lost. Ledger issues database digests only for data that has
been replicated to geographic secondaries, to guarantee that digests never reference
data that might be lost in case of a geographic failover. This only applies to automatic
generation and storage of database digests. In a failover group, both the primary and
secondary databases have the same digest path. Even when you perform a failover, the
digest path doesn't change for either database.

If the failover group is deleted, or you drop the link, both databases behave as primary
databases. At that point, the digest path of the previous secondary database changes,
and a RemovedSecondaryReplica folder is added to the path.

When your database is part of an Always On availability group in SQL Server, the same
principle as active geo-replication is used. The upload of the digests is only done if all
transactions have been replicated to the secondary replicas.

7 Note

Ledger in Azure SQL Managed Instance is currently in public preview. The Managed
Instance link feature is not supported.

Next steps
Ledger overview
Enable automatic digest storage
sys.sp_generate_database_ledger_digest
Database verification
Article • 05/24/2023

Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance

Ledger provides a form of data integrity called forward integrity, which provides
evidence of data tampering on data in your ledger tables. The database verification
process takes as input one or more previously generated database digests. It then
recomputes the hashes stored in the database ledger based on the current state of the
ledger tables. If the computed hashes don't match the input digests, the verification
fails. The failure indicates that the data has been tampered with. The verification process
reports all inconsistencies that it detects.

Database verification process


The verification process scans all ledger and history tables. It recomputes the SHA-256
hashes of their rows and compares them against the database digest files passed to the
verification stored procedure.

Because the ledger verification recomputes all of the hashes for transactions in the
database, it can be a resource-intensive process for databases with large amounts of
data. To reduce the cost of verification, the feature exposes options to verify individual
ledger tables or only a subset of the ledger tables.

You accomplish database verification through two stored procedures, depending on


whether you use automatic digest storage or you manually manage digests.

7 Note

The database option ALLOW_SNAPSHOT_ISOLATION has to be enabled on the database before you can run the verification stored procedures.

Database verification that uses automatic digest storage


When you're using automatic digest storage for generating and storing database
digests, the location of the digest storage is in the system catalog view
sys.database_ledger_digest_locations as JSON objects. Running database verification
consists of executing the sp_verify_database_ledger_from_digest_storage system stored
procedure. Specify the JSON objects from the sys.database_ledger_digest_locations
system catalog view where database digests are configured to be stored.

When you use automatic digest storage, you can change storage locations throughout
the lifecycle of the ledger tables. For example, if you start by using Azure immutable
storage to store your digest files, but later you want to use Azure Confidential Ledger
instead, you can do so. This change in location is stored in
sys.database_ledger_digest_locations.

When you run ledger verification, inspect the location of digest_locations to ensure
digests used in verification are retrieved from the locations you expect. You want to
make sure that a privileged user hasn't changed locations of the digest storage to an
unprotected storage location, such as Azure Storage, without a configured and locked
immutability policy.

To simplify running verification when you use multiple digest storage locations, the
following script will fetch the locations of the digests and execute verification by using
those locations.

SQL

DECLARE @digest_locations NVARCHAR(MAX) = (SELECT * FROM sys.database_ledger_digest_locations FOR JSON AUTO, INCLUDE_NULL_VALUES);
SELECT @digest_locations AS digest_locations;

BEGIN TRY
    EXEC sys.sp_verify_database_ledger_from_digest_storage @digest_locations;
    SELECT 'Ledger verification succeeded.' AS Result;
END TRY
BEGIN CATCH
    THROW;
END CATCH

Database verification that uses manual digest storage


When you're using manual digest storage for generating and storing database digests,
the stored procedure sp_verify_database_ledger is used to verify the ledger database.
The JSON content of the digest is appended in the stored procedure. When you're
running database verification, you can choose to verify all tables in the database or
verify specific tables.

The following code is an example of running the sp_verify_database_ledger stored


procedure by passing two digests for verification:

SQL
EXECUTE sp_verify_database_ledger N'
[
    {
        "database_name": "ledgerdb",
        "block_id": 0,
        "hash": "0xDC160697D823C51377F97020796486A59047EBDBF77C3E8F94EEE0FFF7B38A6A",
        "last_transaction_commit_time": "2020-11-12T18:01:56.6200000",
        "digest_time": "2020-11-12T18:39:27.7385724"
    },
    {
        "database_name": "ledgerdb",
        "block_id": 1,
        "hash": "0xE5BE97FDFFA4A16ADF7301C8B2BEBC4BAE5895CD76785D699B815ED2653D9EF8",
        "last_transaction_commit_time": "2020-11-12T18:39:35.6633333",
        "digest_time": "2020-11-12T18:43:30.4701575"
    }
]';

Return codes for sp_verify_database_ledger and sp_verify_database_ledger_from_digest_storage are 0 (success) or 1 (failure).

Recommendation
Ideally, you want to minimize, or even eliminate, the gap between the time an attack
occurs and the time it's detected. Microsoft recommends scheduling ledger verification
regularly, so that you avoid restoring the database from a backup taken days or months
before the tampering was detected. The verification interval is up to you, but be aware
that ledger verification can be resource-intensive. We recommend running it during a
maintenance window or off-peak hours.

Scheduling database verification in Azure SQL Database can be done with Elastic Jobs or
Azure Automation. For scheduling the database verification in Azure SQL Managed
Instance and SQL Server, you can use SQL Server Agent.
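One way to make scheduling easier is to wrap the verification logic from the earlier script in a stored procedure that an Elastic Job or SQL Server Agent job can call. This is a sketch that assumes automatic digest storage; the procedure name is hypothetical:

SQL

CREATE PROCEDURE dbo.usp_verify_ledger
AS
BEGIN
    -- Fetch the configured digest locations and verify against them
    DECLARE @digest_locations NVARCHAR(MAX) =
        (SELECT * FROM sys.database_ledger_digest_locations
         FOR JSON AUTO, INCLUDE_NULL_VALUES);
    EXEC sys.sp_verify_database_ledger_from_digest_storage @digest_locations;
END;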

7 Note

Ledger in Azure SQL Managed Instance is currently in public preview.

Permissions
Database verification requires the VIEW LEDGER CONTENT permission. For details on
permissions related to ledger tables, see Permissions.
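For example, to let an auditing account run verification (the user name here is hypothetical):

SQL

GRANT VIEW LEDGER CONTENT TO [auditor];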
Next steps
Ledger overview
Verify a ledger table to detect tampering
sys.database_ledger_digest_locations
sp_verify_database_ledger_from_digest_storage
sp_verify_database_ledger
Monitor digest uploads
Article • 05/23/2023

Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance

You can monitor failed and successful ledger digest uploads in the Azure portal in the
Metrics view of your Azure SQL Database.

Digest upload alerts recommendation


We recommend that you configure alerts on failed ledger digest uploads if you want to
be notified when a digest upload fails. Failures might occur due to revoked permissions
on the storage account or a network configuration that makes the storage account
inaccessible. Optionally, you can also configure an alert on successful ledger digest
uploads. If the number of successful uploads drops below a certain value, or to zero,
because someone disabled automatic digest upload, the alert can help draw attention
to the issue. Digest uploads that are explicitly disabled aren't considered failures in this
case.

Next steps
Ledger overview
Enable automatic digest storage
Recover ledger database after
tampering
Article • 05/24/2023

Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance

How to recover after tampering occurs?


The most straightforward way to repair any kind of tampering is to restore the database
to the latest backup that can be verified. To do that, restore the database to different
points in time and verify the ledger by using earlier database digests. The latest point in
time that verifies successfully is guaranteed to be free of tampering and can be used to
continue transaction processing. For this reason, it's critical that backups are frequent
enough to get as close as possible to the time of tampering. Backup scheduling is
automatic for Azure SQL Database. Although this technique is simple, it has an
important caveat: if any transactions were executed after the tampering occurred, you
either need to accept that these transactions are lost, or you need to manually repair
the ledger table by reinserting the information for the verified transactions and
recomputing the hashes for the new transactions that occurred after the first tampering
event in the database ledger.
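On SQL Server, a sketch of this flow might look like the following; the backup file names, paths, database names, and STOPAT time are all hypothetical, and in Azure SQL Database you would use point-in-time restore instead:

SQL

-- Restore a copy of the database to a candidate point in time
RESTORE DATABASE [ledgerdb_candidate]
    FROM DISK = N'C:\backups\ledgerdb_full.bak'
    WITH MOVE N'ledgerdb' TO N'C:\data\ledgerdb_candidate.mdf',
         MOVE N'ledgerdb_log' TO N'C:\data\ledgerdb_candidate.ldf',
         NORECOVERY;

RESTORE LOG [ledgerdb_candidate]
    FROM DISK = N'C:\backups\ledgerdb_log1.trn'
    WITH STOPAT = N'2023-05-01T10:00:00', RECOVERY;

-- In the restored copy, verify the ledger against previously saved digests:
-- EXECUTE sp_verify_database_ledger N'[ ...digest JSON... ]';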

Tampering categories
Depending on the type of tampering, there are cases where you can repair the ledger
without losing data. You should consider two categories of tampering events.

Tampering didn't affect further transactions


The tampering event modified some data stored in the ledger but didn't affect any
further transactions. This might be because the attack was detected before any
transactions would operate over the tampered data or because the attack only affected
data in a way that doesn't affect new transactions. For example, if you use a ledger table
to store banking transaction details, tampering with details of existing transactions
doesn't impact new transactions, which will work over the current balances.

Since the tampering didn't affect any transactions that occurred after the tampering
event, the new transaction execution and generated results are correct. Based on that,
you should ideally bring the ledger to a consistent state without affecting these
transactions.

If the attacker didn't tamper with the database-level ledger, this is easy to detect and
repair. The database ledger is in a consistent state with all database digests generated,
and any new transactions appended to it have been hashed using the valid hashes of
earlier transactions. Based on that, any database digests that were generated, even for
transactions after the tampering occurred, are still valid. You can attempt to retrieve the
correct table ledger payload for the tampered transactions from backups that can still
be validated to be secure (using the ledger validation on them) and repair the
operational database by overwriting the tampered data in the table ledger. This will
create a new transaction recording the repairing transactions.

Tampering affected data used by further transactions


The tampering event affected data that was later used by further transactions, therefore
affecting their execution. For example, in a banking application where the current
account balances are stored in a ledger table, modifying the current state of the table
can be disastrous for further transactions since it can allow new transactions to
overspend.

If the attacker tampered with the database ledger, recomputing the hashes of blocks to
make it internally consistent (until verified against external database digests), then new
transactions and database digests will be generated over invalid hashes. This leads to a
fork in the ledger, since the new database digests generated map to an invalid state and
even if you repair the ledger by using earlier backups, all these database digests are
permanently invalid. Additionally, since the database ledger is broken, you can't trust
the details about transactions that occurred after tampering until you verify them. Based
on that, the tampering can be potentially reverted by:

1. Using backups to restore the state of the tampered transactions.

2. Verifying the portion of the ledger after the last transaction recovered from the
backup, up to the end of the ledger. For this, you have to use the database digests
from the forked part of the chain. Although these digests don't match the original
part of the ledger, they can still verify that the forked portion of the ledger hasn't
been tampered with (see the verification sketch after this list). If these digests also
indicate tampering, there has been more than one tampering event, and the
process needs to be applied recursively to recover the different portions of the
ledger from backups.
3. Manually repairing the ledger tables by reinserting the information for the verified
transactions and recomputing the hashes for the new transactions that occurred
after the first tampering event in the database ledger.
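
The verification in step 2 uses the standard ledger verification procedure against a
specific set of digests. The following is a minimal sketch, assuming automatic digest
storage is configured so that the digest locations can be read from
sys.database_ledger_digest_locations:

SQL

-- Collect the locations of all database digests as a JSON document.
DECLARE @digest_locations NVARCHAR(MAX) =
    (SELECT * FROM sys.database_ledger_digest_locations
     FOR JSON AUTO, INCLUDE_NULL_VALUES);

-- Verify the database ledger against the digests stored in those locations.
BEGIN TRY
    EXEC sys.sp_verify_database_ledger_from_digest_storage @digest_locations;
    SELECT 'Ledger verification succeeded.' AS Result;
END TRY
BEGIN CATCH
    THROW;
END CATCH
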
See also
Database ledger
Verify a ledger table to detect tampering
sys.database_ledger_digest_locations
sp_verify_database_ledger_from_digest_storage
sp_verify_database_ledger
Ledger considerations and limitations
Article • 05/23/2023

Applies to: SQL Server 2022 (16.x) Azure SQL Database Azure SQL
Managed Instance

There are some considerations and limitations to be aware of when working with ledger
tables due to the nature of system-versioning and immutable data.

Note

Ledger in Azure SQL Managed Instance is currently in public preview.

General considerations and limitations


Consider the following when working with ledger.

A ledger database (a database with the ledger property set to ON) can't be
converted to a regular database (with the ledger property set to OFF).
Automatic generation and storage of database digests is currently available in
Azure SQL Database, but not supported on SQL Server.
Automated digest management with ledger tables by using Azure Storage
immutable blobs doesn't offer the ability for users to use locally redundant storage
(LRS) accounts.
When a ledger database is created, all new tables in the database are created as
updatable ledger tables by default (without specifying the APPEND_ONLY = ON
clause). To create append-only ledger tables, use the APPEND_ONLY = ON clause in the
CREATE TABLE (Transact-SQL) statement, as the sketch after this list shows.
A transaction can update up to 200 ledger tables.
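
For example, here's a minimal sketch of creating a ledger database and an append-only
ledger table inside it, using hypothetical names:

SQL

-- Create a ledger database; the ledger property can't be turned off later.
CREATE DATABASE MyLedgerDB WITH LEDGER = ON;
GO

-- In a ledger database, new tables are updatable ledger tables by default;
-- APPEND_ONLY = ON creates an append-only ledger table instead.
CREATE TABLE dbo.AuditEvents
(
    EventID INT NOT NULL,
    EventDescription NVARCHAR(200) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));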

Ledger table considerations and limitations


Existing tables in a database that aren't ledger tables can't be converted to ledger
tables. For more information, see Migrate data from regular tables to ledger tables.
After a ledger table is created, it can't be reverted to a table that isn't a ledger
table.
Deleting older data in append-only ledger tables or the history table of updatable
ledger tables isn't supported.
TRUNCATE TABLE isn't supported.
When an updatable ledger table is created, it adds four GENERATED ALWAYS
columns to the ledger table. An append-only ledger table adds two columns to the
ledger table. These new columns count against the maximum supported number
of columns in Azure SQL Database (1,024).
In-memory tables aren't supported.
Sparse column sets aren't supported.
SWITCH IN/OUT partition isn't supported.
DBCC CLONEDATABASE isn't supported.
Ledger tables can't have full-text indexes.
Ledger tables can't be graph tables.
Ledger tables can't be FileTables.
Ledger tables can't have a rowstore non-clustered index when they have a
clustered columnstore index.
Change tracking isn't allowed on the history table but is allowed on ledger tables.
Change data capture isn't supported for ledger tables.
Transactional replication isn't supported for ledger tables.
Database mirroring isn't supported.
Azure Synapse Link is supported but only for the ledger table, not the history table.
The Managed Instance link feature is not supported.
After a native restore of a database backup to Azure SQL Managed Instance, you
must change the digest storage path manually.

Unsupported data types


XML
SqlVariant
User-defined data type
FILESTREAM

Temporal table limitations


Updatable ledger tables are based on the technology of temporal tables and inherit
most of their limitations, but not all of them. The following limitations are inherited
from temporal tables.

If the name of a history table is specified during history table creation, you must
specify the schema and table name and also the name of the ledger view.
By default, the history table is PAGE compressed.
If the current table is partitioned, the history table is created on the default file
group because partitioning configuration isn't replicated automatically from the
current table to the history table.
Temporal and history tables can't be a FILETABLE and can contain columns of any
supported datatype other than FILESTREAM. FILETABLE and FILESTREAM allow data
manipulation outside of SQL Server, and thus system versioning can't be
guaranteed.
A node or edge table can't be created as or altered to a temporal table. Graph isn't
supported with ledger.
While temporal tables support blob data types, such as (n)varchar(max) ,
varbinary(max) , (n)text , and image , they'll incur significant storage costs and
have performance implications due to their size. As such, when designing your
system, care should be taken when using these data types.
The history table must be created in the same database as the current table.
Temporal querying over Linked Server isn't supported.
The history table can't have constraints (Primary Key, Foreign Key, table, or column
constraints).
The online option ( WITH (ONLINE = ON) ) has no effect on ALTER TABLE ALTER COLUMN
for a system-versioned temporal table. ALTER COLUMN isn't performed as an online
operation, regardless of the value specified for the ONLINE option.
INSERT and UPDATE statements can't reference the GENERATED ALWAYS columns.

Attempts to insert values directly into these columns will be blocked.


UPDATETEXT and WRITETEXT aren't supported.

Triggers on the history table aren't allowed.


Usage of replication technologies is limited:
Always On: Fully supported
Snapshot, merge and transactional replication: Not supported for temporal
tables
A history table can't be configured as current table in a chain of history tables.
The following objects or properties aren't replicated from the current table to the
history table when the history table is created:
Period definition
Identity definition
Indexes
Statistics
Check constraints
Triggers
Partitioning configuration
Permissions
Row-level security predicates

Schema changes consideration


Adding columns
Adding nullable columns is supported. Adding non-nullable columns isn't supported.
Ledger is designed to ignore NULL values when computing the hash of a row version.
Based on that, when a nullable column is added, ledger modifies the schema of the
ledger and history tables to include the new column; however, this doesn't impact the
hashes of existing rows. Adding columns in ledger tables is captured in
sys.ledger_column_history.
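
For example, a minimal sketch, assuming the hypothetical dbo.AuditEvents ledger table
from the earlier sketch:

SQL

-- Adding a nullable column is supported; hashes of existing rows are unaffected.
ALTER TABLE dbo.AuditEvents ADD Severity INT NULL;

-- The column addition is captured for auditing.
SELECT * FROM sys.ledger_column_history;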

Dropping columns and tables


Normally, dropping a column or table completely erases the underlying data from the
database and is fundamentally incompatible with the ledger functionality that requires
data to be immutable. Instead of deleting the data, ledger simply renames the objects
being dropped so that they're logically removed from the user schema, but physically
remain in the database. Any dropped columns are also hidden from the ledger table
schema, so that they're invisible to the user application. However, the data of such
dropped objects remains available for the ledger verification process, and allows users
to inspect any historical data through the corresponding ledger views. Dropping
columns in ledger tables is captured in sys.ledger_column_history. Dropping a ledger
table is captured in sys.ledger_table_history. Dropping ledger tables and its dependent
objects are marked as dropped in system catalog views and renamed:

Dropped ledger tables are marked as dropped by setting is_dropped_ledger_table
in sys.tables and renamed using the following format:
MSSQL_DroppedLedgerTable_<dropped_ledger_table_name>_<GUID> .

Dropped history tables for updatable ledger tables are renamed using the
following format:
MSSQL_DroppedLedgerHistory_<dropped_history_table_name>_<GUID> .

Dropped ledger views are marked as dropped by setting is_dropped_ledger_view
in sys.views and renamed using the following format:
MSSQL_DroppedLedgerView_<dropped_ledger_view_name>_<GUID> .
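
For example, a minimal sketch of listing dropped ledger tables that physically remain in
the database:

SQL

-- Dropped ledger tables stay in the database and remain visible in the catalog.
SELECT [name], is_dropped_ledger_table
FROM sys.tables
WHERE is_dropped_ledger_table = 1;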

Note

The names of dropped ledger tables, history tables, and ledger views might be
truncated if the length of the renamed table or view exceeds 128 characters.

Altering columns
Any changes that don't impact the underlying data of a ledger table are supported
without any special handling, because they don't impact the hashes being captured in
the ledger. These changes include:

Changing nullability
Collation for Unicode strings
The length of variable length columns

However, any operation that might affect the format of existing data, such as changing
the data type, isn't supported.
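
For example, a minimal sketch of a supported alteration, again assuming the
hypothetical dbo.AuditEvents ledger table:

SQL

-- Lengthening a variable-length column doesn't change the stored data format,
-- so it's supported on ledger tables.
ALTER TABLE dbo.AuditEvents ALTER COLUMN EventDescription NVARCHAR(400) NOT NULL;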

Next steps
Ledger overview
Updatable ledger tables
Append-only ledger tables
Database ledger
Configure and manage content
reference - Azure SQL Database
Article • 02/07/2023

Applies to: Azure SQL Database

In this article, you can find references to various guides, scripts, and explanations
that help you manage and configure Azure SQL Database.

Load data
Migrate to SQL Database
Learn how to manage SQL Database after migration.
Copy a database
Import a DB from a BACPAC
Export a DB to BACPAC
Load data with BCP
Load data with ADF

Configure features
Configure Azure Active Directory (Azure AD) auth
Configure Conditional Access
Azure AD Multi-Factor Authentication
Configure backup retention for a database to keep your backups on Azure Blob
Storage.
Configure geo-replication to keep a replica of your database in another region.
Configure auto-failover group to automatically fail over a group of single or
pooled databases to a secondary server in another region in the event of a
disaster.
Configure temporal retention policy
Configure TDE with BYOK
Rotate TDE BYOK keys
Remove TDE protector
Configure In-Memory OLTP
Configure Azure Automation
Configure transactional replication to replicate your data between databases.
Configure threat detection to let Azure SQL Database identify suspicious activities
such as SQL Injection or access from suspicious locations.
Configure dynamic data masking to protect your sensitive data.
Configure security for geo-replicas.

Monitor and tune your database


Manual tuning
Use DMVs to monitor performance
Use Query store to monitor performance
Enable automatic tuning to let Azure SQL Database optimize performance of your
workload.
Enable e-mail notifications for automatic tuning to get information about tuning
recommendations.
Apply performance recommendations and optimize your database.
Create alerts to get notifications from Azure SQL Database.
Troubleshoot connectivity if you notice some connectivity issues between the
applications and the database. You can also use Resource Health for connectivity
issues.
Troubleshoot performance with Intelligent Insights
Manage file space to monitor storage usage in your database.
Use Intelligent Insights diagnostics log
Monitor In-memory OLTP space

Extended events
Extended events
Store Extended events into event file
Store Extended events into ring buffer

Query distributed data


Query vertically partitioned data across multiple databases.
Report across scaled-out data tier.
Query across tables with different schemas.

Data sync
SQL Data Sync
Data Sync Agent
Replicate schema changes
Monitor with OMS
Best practices for Data Sync
Troubleshoot Data Sync

Elastic Database jobs


Create and manage Elastic Database Jobs using PowerShell.
Create and manage Elastic Database Jobs using Transact-SQL.

Database sharding
Upgrade elastic database client library.
Create sharded app.
Query horizontally sharded data.
Run Multi-shard queries.
Move sharded data.
Configure security in database shards.
Add a shard to the current set of database shards.
Fix shard map problems.
Migrate sharded DB.
Create counters.
Use entity framework to query sharded data.
Use Dapper framework to query sharded data.

Develop applications
Connectivity
Use Spark Connector
Authenticate app
Use batching for better performance
Connectivity guidance
DNS aliases
Setup DNS alias PowerShell
Ports - ADO.NET
C and C++
Excel

Design applications
Design for disaster recovery
Design for elastic pools
Design for app upgrades

Design Multi-tenant software as a service (SaaS) applications
SaaS design patterns
SaaS video indexer
SaaS app security

Next steps
Learn more about How-to guides for Azure SQL Managed Instance
Quickstart: Use Azure Data Studio to
connect and query Azure SQL Database
Article • 05/10/2023

In this quickstart, you'll use Azure Data Studio to connect to an Azure SQL Database
server. You'll then run Transact-SQL (T-SQL) statements to create and query the
TutorialDB database, which is used in other Azure Data Studio tutorials.

Prerequisites
To complete this quickstart, you need Azure Data Studio, and an Azure SQL Database
server.

Install Azure Data Studio

If you don't have an Azure SQL server, complete one of the following Azure SQL
Database quickstarts. Remember the fully qualified server name and sign in credentials
for later steps:

Create DB - Portal
Create DB - CLI
Create DB - PowerShell

Connect to your Azure SQL Database server


Use Azure Data Studio to establish a connection to your Azure SQL Database server.

1. The first time you run Azure Data Studio, the Welcome page should open. If you
don't see the Welcome page, select Help > Welcome. Select New Connection to
open the Connection pane:
2. This article uses SQL sign-in, but for Azure SQL Database, Azure AD Universal MFA
authentication is also supported. Fill in the following fields using the server name,
user name, and password for your Azure SQL server:

| Setting | Suggested value | Description |
| --- | --- | --- |
| Server name | The fully qualified server name | Something like: servername.database.windows.net. |
| Authentication | SQL Login | This tutorial uses SQL Authentication. |
| User name | The server admin account user name | The user name from the account used to create the server. |
| Password (SQL Login) | The server admin account password | The password from the account used to create the server. |
| Save Password? | Yes or No | Select Yes if you don't want to enter the password each time. |
| Database name | Leave blank | You're only connecting to the server here. |
| Server Group | Select <Default> | You can set this field to a specific server group you created. |
3. Select Connect.

4. If your server doesn't have a firewall rule allowing Azure Data Studio to connect,
the Create new firewall rule form opens. Complete the form to create a new
firewall rule. For details, see Firewall rules.

After successfully connecting, your server opens in the SERVERS sidebar.


Create the tutorial database
The next sections create the TutorialDB database that's used in other Azure Data Studio
tutorials.

1. Right-click on your Azure SQL server in the SERVERS sidebar and select New
Query.

2. Paste this SQL into the query editor.

SQL

IF NOT EXISTS (
    SELECT name
    FROM sys.databases
    WHERE name = N'TutorialDB'
)
CREATE DATABASE [TutorialDB]
GO

ALTER DATABASE [TutorialDB] SET QUERY_STORE=ON
GO

3. From the toolbar, select Run. Notifications appear in the MESSAGES pane showing
query progress.

Create a table
The query editor is connected to the master database, but we want to create a table in
the TutorialDB database.

1. Connect to the TutorialDB database.


2. Create a Customers table.

Replace the previous query in the query editor with this one and select Run.

SQL

-- Create a new table called 'Customers' in schema 'dbo'
-- Drop the table if it already exists
IF OBJECT_ID('dbo.Customers', 'U') IS NOT NULL
DROP TABLE dbo.Customers
GO
-- Create the table in the specified schema
CREATE TABLE dbo.Customers
(
    CustomerId INT NOT NULL PRIMARY KEY, -- primary key column
    Name [NVARCHAR](50) NOT NULL,
    Location [NVARCHAR](50) NOT NULL,
    Email [NVARCHAR](50) NOT NULL
);
GO

Insert rows into the table


Replace the previous query with this one and select Run.
SQL

-- Insert rows into table 'Customers'
INSERT INTO dbo.Customers
    ([CustomerId],[Name],[Location],[Email])
VALUES
    ( 1, N'Orlando', N'Australia', N''),
    ( 2, N'Keith', N'India', N'keith0@adventure-works.com'),
    ( 3, N'Donna', N'Germany', N'donna0@adventure-works.com'),
    ( 4, N'Janet', N'United States', N'janet1@adventure-works.com')
GO

View the result


Replace the previous query with this one and select Run.

SQL

-- Select rows from table 'Customers'

SELECT * FROM dbo.Customers;

The query results display:

Clean up resources
Later quickstart articles build upon the resources created here. If you plan to work
through these articles, be sure not to delete these resources. Otherwise, in the Azure
portal, delete the resources you no longer need. For details, see Clean up resources.

Next steps
Now that you've successfully connected to an Azure SQL database and run a query, try
the Code editor tutorial.
Use Spring Data JDBC with Azure SQL
Database
Article • 04/19/2023

This tutorial demonstrates how to store data in Azure SQL Database using Spring Data
JDBC .

JDBC is the standard Java API to connect to traditional relational databases.

In this tutorial, we include two authentication methods: Azure Active Directory (Azure
AD) authentication and SQL Database authentication. The Passwordless tab shows the
Azure AD authentication and the Password tab shows the SQL Database authentication.

Azure AD authentication is a mechanism for connecting to Azure SQL Database using
identities defined in Azure AD. With Azure AD authentication, you can manage database
user identities and other Microsoft services in a central location, which simplifies
permission management.

SQL Database authentication uses accounts stored in SQL Database. If you choose to
use passwords as credentials for the accounts, these credentials will be stored in the
user table. Because these passwords are stored in SQL Database, you need to manage
the rotation of the passwords by yourself.

Prerequisites
An Azure subscription - create one for free .

Java Development Kit (JDK), version 8 or higher.

Apache Maven .

Azure CLI.

sqlcmd Utility.

ODBC Driver 17 or 18.

If you don't have one, create an Azure SQL Server instance named sqlservertest
and a database named demo . For instructions, see Quickstart: Create a single
database - Azure SQL Database.
If you don't have a Spring Boot application, create a Maven project with the Spring
Initializr . Be sure to select Maven Project and, under Dependencies, add the
Spring Web, Spring Data JDBC, and MS SQL Server Driver dependencies, and
then select Java version 8 or higher.

See the sample application


In this tutorial, you'll code a sample application. If you want to go faster, this application
is already coded and available at https://github.com/Azure-Samples/quickstart-spring-
data-jdbc-sql-server .

Configure a firewall rule for your Azure SQL


Database server
Azure SQL Database instances are secured by default. They have a firewall that doesn't
allow any incoming connection.

To be able to use your database, open the server's firewall to allow the local IP address
to access the database server. For more information, see Tutorial: Secure a database in
Azure SQL Database.
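
For example, here's a minimal sketch using the Azure CLI, assuming a hypothetical rule
name and that you've looked up your public IP address (sqlservertest is the server
name from the prerequisites):

Azure CLI

az sql server firewall-rule create \
    --resource-group <your-resource-group-name> \
    --server sqlservertest \
    --name allow-local-ip \
    --start-ip-address <your-public-ip-address> \
    --end-ip-address <your-public-ip-address>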

If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.

Create an SQL database non-admin user and


grant permission
This step will create a non-admin user and grant all permissions on the demo database
to it.

Passwordless (Recommended)

To use passwordless connections, see Tutorial: Secure a database in Azure SQL


Database or use Service Connector to create an Azure AD admin user for your
Azure SQL Database server, as shown in the following steps:

1. First, install the Service Connector passwordless extension for the Azure CLI:

Azure CLI
az extension add --name serviceconnector-passwordless --upgrade

2. Then, use the following command to create the Azure AD non-admin user:

Azure CLI

az connection create sql \
    --resource-group <your-resource-group-name> \
    --connection sql_conn \
    --target-resource-group <your-resource-group-name> \
    --server sqlservertest \
    --database demo \
    --user-account \
    --query authInfo.userName \
    --output tsv

The Azure AD admin you created is an SQL database admin user, so you don't need
to create a new user.

Important

Azure SQL Database passwordless connections require upgrading the MS SQL
Server Driver to version 12.1.0 or higher. The connection option is
authentication=DefaultAzureCredential in version 12.1.0 and
authentication=ActiveDirectoryDefault in version 12.2.0 .

Store data from Azure SQL Database


With an Azure SQL Database instance, you can store data by using Spring Cloud Azure.

To install the Spring Cloud Azure Starter module, add the following dependencies to
your pom.xml file:

The Spring Cloud Azure Bill of Materials (BOM):

XML

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.azure.spring</groupId>
            <artifactId>spring-cloud-azure-dependencies</artifactId>
            <version>4.9.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Note

If you're using Spring Boot 3.x, be sure to set the spring-cloud-azure-dependencies
version to 5.3.0 . For more information about the spring-cloud-azure-dependencies
version, see Which Version of Spring Cloud Azure Should I Use .

The Spring Cloud Azure Starter artifact:

XML

<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-starter</artifactId>
</dependency>

Configure Spring Boot to use Azure SQL Database


To store data from Azure SQL Database using Spring Data JDBC, follow these steps to
configure the application:

1. Configure the Azure SQL Database credentials in the application.properties
configuration file.

Passwordless (Recommended)

properties

logging.level.org.springframework.jdbc.core=DEBUG

spring.datasource.url=jdbc:sqlserver://sqlservertest.database.windows.net:1433;databaseName=demo;authentication=DefaultAzureCredential;

spring.sql.init.mode=always

Warning

The configuration property spring.sql.init.mode=always means that Spring
Boot will automatically generate a database schema, using the schema.sql file
that you'll create next, each time the server is started. This is great for testing,
but remember that this will delete your data at each restart, so you shouldn't
use it in production.

2. Create the src/main/resources/schema.sql configuration file to configure the


database schema, then add the following contents.

SQL

DROP TABLE IF EXISTS todo;

CREATE TABLE todo (id INT IDENTITY PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BIT);

3. Create a new Todo Java class. This class is a domain model mapped onto the todo
table that will be created automatically by Spring Boot. The following code omits
the getter and setter methods.

Java

import org.springframework.data.annotation.Id;

public class Todo {

    public Todo() {
    }

    public Todo(String description, String details, boolean done) {
        this.description = description;
        this.details = details;
        this.done = done;
    }

    @Id
    private Long id;

    private String description;

    private String details;

    private boolean done;
}

4. Edit the startup class file to show the following content.

Java

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.data.repository.CrudRepository;

import java.util.stream.Stream;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    ApplicationListener<ApplicationReadyEvent> basicsApplicationListener(TodoRepository repository) {
        return event -> repository
            .saveAll(Stream.of("A", "B", "C").map(name -> new Todo("configuration", "congratulations, you have set up correctly!", true)).toList())
            .forEach(System.out::println);
    }
}

interface TodoRepository extends CrudRepository<Todo, Long> {
}

Tip

In this tutorial, there are no authentication operations in the configurations or
the code. However, connecting to Azure services requires authentication. To
complete the authentication, you need to use Azure Identity. Spring Cloud
Azure uses DefaultAzureCredential , which the Azure Identity library provides
to help you get credentials without any code changes.

DefaultAzureCredential supports multiple authentication methods and
determines which method to use at runtime. This approach enables your app
to use different authentication methods in different environments (such as
local and production environments) without implementing environment-specific
code. For more information, see the Default Azure credential section
of Authenticate Azure-hosted Java applications.

To complete the authentication in local development environments, you can
use Azure CLI, Visual Studio Code, PowerShell, or other methods. For more
information, see Azure authentication in Java development environments. To
complete the authentication in Azure hosting environments, we recommend
using managed identity. For more information, see What are managed
identities for Azure resources?

5. Start the application. The application stores data into the database. You'll see logs
similar to the following example:

shell

2023-02-01 10:22:36.701 DEBUG 7948 --- [main] o.s.jdbc.core.JdbcTemplate : Executing prepared SQL statement [INSERT INTO todo (description, details, done) VALUES (?, ?, ?)]
com.example.demo.Todo@4bdb04c8

Deploy to Azure Spring Apps


Now that you have the Spring Boot application running locally, it's time to move it to
production. Azure Spring Apps makes it easy to deploy Spring Boot applications to
Azure without any code changes. The service manages the infrastructure of Spring
applications so developers can focus on their code. Azure Spring Apps provides lifecycle
management using comprehensive monitoring and diagnostics, configuration
management, service discovery, CI/CD integration, blue-green deployments, and more.
To deploy your application to Azure Spring Apps, see Deploy your first application to
Azure Spring Apps.

Next steps
Azure for Spring developers
Use Spring Data JPA with Azure SQL
Database
Article • 04/19/2023

This tutorial demonstrates how to store data in Azure SQL Database using Spring Data
JPA .

The Java Persistence API (JPA) is the standard Java API for object-relational mapping.

In this tutorial, we include two authentication methods: Azure Active Directory (Azure
AD) authentication and SQL Database authentication. The Passwordless tab shows the
Azure AD authentication and the Password tab shows the SQL Database authentication.

Azure AD authentication is a mechanism for connecting to Azure SQL Database using
identities defined in Azure AD. With Azure AD authentication, you can manage database
user identities and other Microsoft services in a central location, which simplifies
permission management.

SQL Database authentication uses accounts stored in SQL Database. If you choose to
use passwords as credentials for the accounts, these credentials will be stored in the
user table. Because these passwords are stored in SQL Database, you need to manage
the rotation of the passwords by yourself.

Prerequisites
An Azure subscription - create one for free .

Java Development Kit (JDK), version 8 or higher.

Apache Maven .

Azure CLI.

sqlcmd Utility

ODBC Driver 17 or 18.

If you don't have one, create an Azure SQL Server instance named sqlservertest
and a database named demo . For instructions, see Quickstart: Create a single
database - Azure SQL Database.
If you don't have a Spring Boot application, create a Maven project with the Spring
Initializr . Be sure to select Maven Project and, under Dependencies, add the
Spring Web, Spring Data JPA, and MS SQL Server Driver dependencies, and then
select Java version 8 or higher.

Important

To use passwordless connections, upgrade MS SQL Server Driver to version
12.1.0 or higher, and then create an Azure AD admin user for your Azure SQL
Database server instance. For more information, see the Create an Azure AD admin
section of Tutorial: Secure a database in Azure SQL Database.

See the sample application


In this tutorial, you'll code a sample application. If you want to go faster, this application
is already coded and available at https://github.com/Azure-Samples/quickstart-spring-
data-jpa-sql-server .

Configure a firewall rule for your Azure SQL


Database server
Azure SQL Database instances are secured by default. They have a firewall that doesn't
allow any incoming connection.

To be able to use your database, open the server's firewall to allow the local IP address
to access the database server. For more information, see Tutorial: Secure a database in
Azure SQL Database.

If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.

Create an SQL database non-admin user and


grant permission
This step will create a non-admin user and grant all permissions on the demo database
to it.

Passwordless (Recommended)
To use passwordless connections, see Tutorial: Secure a database in Azure SQL
Database or use Service Connector to create an Azure AD admin user for your
Azure SQL Database server, as shown in the following steps:

1. First, install the Service Connector passwordless extension for the Azure CLI:

Azure CLI

az extension add --name serviceconnector-passwordless --upgrade

2. Then, use the following command to create the Azure AD non-admin user:

Azure CLI

az connection create sql \
    --resource-group <your-resource-group-name> \
    --connection sql_conn \
    --target-resource-group <your-resource-group-name> \
    --server sqlservertest \
    --database demo \
    --user-account \
    --query authInfo.userName \
    --output tsv

The Azure AD admin you created is an SQL database admin user, so you don't need
to create a new user.

Important

Azure SQL Database passwordless connections require upgrading the MS SQL
Server Driver to version 12.1.0 or higher. The connection option is
authentication=DefaultAzureCredential in version 12.1.0 and
authentication=ActiveDirectoryDefault in version 12.2.0 .

Store data from Azure SQL Database


With an Azure SQL Database instance, you can store data by using Spring Cloud Azure.

To install the Spring Cloud Azure Starter module, add the following dependencies to
your pom.xml file:

The Spring Cloud Azure Bill of Materials (BOM):


XML

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.azure.spring</groupId>
            <artifactId>spring-cloud-azure-dependencies</artifactId>
            <version>4.9.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Note

If you're using Spring Boot 3.x, be sure to set the spring-cloud-azure-dependencies
version to 5.3.0 . For more information about the spring-cloud-azure-dependencies
version, see Which Version of Spring Cloud Azure Should I Use .

The Spring Cloud Azure Starter artifact:

XML

<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-starter</artifactId>
</dependency>

Configure Spring Boot to use Azure SQL Database


To store data from Azure SQL Database using Spring Data JPA, follow these steps to
configure the application:

1. Configure the Azure SQL Database credentials in the application.properties
configuration file.

Passwordless (Recommended)

properties

logging.level.org.hibernate.SQL=DEBUG

spring.datasource.url=jdbc:sqlserver://sqlservertest.database.windows.net:1433;databaseName=demo;authentication=DefaultAzureCredential;

spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServer2016Dialect

spring.jpa.hibernate.ddl-auto=create-drop

Warning

The configuration property spring.jpa.hibernate.ddl-auto=create-drop
means that Spring Boot will automatically create a database schema at
application start-up, and will try to delete it when it shuts down. This feature is
great for testing, but remember that it will delete your data at each restart, so
you shouldn't use it in production.

2. Create a new Todo Java class. This class is a domain model mapped onto the todo
table that will be created automatically by JPA. The following code omits the
getter and setter methods.

Java

package com.example.demo;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Todo {

    public Todo() {
    }

    public Todo(String description, String details, boolean done) {
        this.description = description;
        this.details = details;
        this.done = done;
    }

    @Id
    @GeneratedValue
    private Long id;

    private String description;

    private String details;

    private boolean done;
}

3. Edit the startup class file to show the following content.

Java

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.data.jpa.repository.JpaRepository;

import java.util.stream.Collectors;
import java.util.stream.Stream;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    ApplicationListener<ApplicationReadyEvent> basicsApplicationListener(TodoRepository repository) {
        return event -> repository
            .saveAll(Stream.of("A", "B", "C").map(name -> new Todo("configuration", "congratulations, you have set up correctly!", true)).collect(Collectors.toList()))
            .forEach(System.out::println);
    }
}

interface TodoRepository extends JpaRepository<Todo, Long> {
}

Tip

In this tutorial, there are no authentication operations in the configurations or
the code. However, connecting to Azure services requires authentication. To
complete the authentication, you need to use Azure Identity. Spring Cloud
Azure uses DefaultAzureCredential , which the Azure Identity library provides
to help you get credentials without any code changes.

DefaultAzureCredential supports multiple authentication methods and
determines which method to use at runtime. This approach enables your app
to use different authentication methods in different environments (such as
local and production environments) without implementing environment-specific
code. For more information, see the Default Azure credential section
of Authenticate Azure-hosted Java applications.

To complete the authentication in local development environments, you can
use Azure CLI, Visual Studio Code, PowerShell, or other methods. For more
information, see Azure authentication in Java development environments. To
complete the authentication in Azure hosting environments, we recommend
using managed identity. For more information, see What are managed
identities for Azure resources?

4. Start the application. You'll see logs similar to the following example:

shell

2023-02-01 10:29:19.763 DEBUG 4392 --- [main] org.hibernate.SQL : insert into todo (description, details, done, id) values (?, ?, ?, ?)
com.example.demo.Todo@1f

Deploy to Azure Spring Apps


Now that you have the Spring Boot application running locally, it's time to move it to
production. Azure Spring Apps makes it easy to deploy Spring Boot applications to
Azure without any code changes. The service manages the infrastructure of Spring
applications so developers can focus on their code. Azure Spring Apps provides lifecycle
management using comprehensive monitoring and diagnostics, configuration
management, service discovery, CI/CD integration, blue-green deployments, and more.
To deploy your application to Azure Spring Apps, see Deploy your first application to
Azure Spring Apps.

Next steps
Azure for Spring developers
Use Spring Data R2DBC with Azure SQL
Database
Article • 05/26/2023

This article demonstrates creating a sample application that uses Spring Data R2DBC
to store and retrieve information in Azure SQL Database by using the R2DBC
implementation for Microsoft SQL Server from the r2dbc-mssql GitHub repository .

R2DBC brings reactive APIs to traditional relational databases. You can use it with
Spring WebFlux to create fully reactive Spring Boot applications that use non-blocking
APIs. It provides better scalability than the classic "one thread per connection" approach.

Prerequisites
An Azure subscription - create one for free .

Java Development Kit (JDK), version 8 or higher.

Apache Maven .

Azure CLI.

sqlcmd Utility.

cURL or a similar HTTP utility to test functionality.

See the sample application


In this article, you'll code a sample application. If you want to go faster, this application
is already coded and available at https://github.com/Azure-Samples/quickstart-spring-
data-r2dbc-sql-server .

Prepare the working environment


First, set up some environment variables by using the following commands:

Bash

export AZ_RESOURCE_GROUP=database-workshop
export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
export AZ_LOCATION=<YOUR_AZURE_REGION>
export AZ_SQL_SERVER_ADMIN_USERNAME=spring
export AZ_SQL_SERVER_ADMIN_PASSWORD=<YOUR_AZURE_SQL_ADMIN_PASSWORD>
export AZ_SQL_SERVER_NON_ADMIN_USERNAME=nonspring
export AZ_SQL_SERVER_NON_ADMIN_PASSWORD=<YOUR_AZURE_SQL_NON_ADMIN_PASSWORD>
export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>

Replace the placeholders with the following values, which are used throughout this
article:

<YOUR_DATABASE_NAME> : The name of your Azure SQL Database server, which should be unique across Azure.

<YOUR_AZURE_REGION> : The Azure region you'll use. You can use eastus by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by using az account list-locations .

<AZ_SQL_SERVER_ADMIN_PASSWORD> and <AZ_SQL_SERVER_NON_ADMIN_PASSWORD> : The password of your Azure SQL Database server, which should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).

<YOUR_LOCAL_IP_ADDRESS> : The IP address of your local computer, from which you'll run your Spring Boot application. One convenient way to find it is to open whatismyip.akamai.com .

Next, create a resource group by using the following command:

Azure CLI

az group create \
--name $AZ_RESOURCE_GROUP \
--location $AZ_LOCATION \
--output tsv

Create an Azure SQL Database instance


Next, create a managed Azure SQL Database server instance by running the following
command.

Note

The MS SQL password has to meet specific criteria, and setup will fail with a
non-compliant password. For more information, see Password Policy.

Azure CLI
az sql server create \
--resource-group $AZ_RESOURCE_GROUP \
--name $AZ_DATABASE_NAME \
--location $AZ_LOCATION \
--admin-user $AZ_SQL_SERVER_ADMIN_USERNAME \
--admin-password $AZ_SQL_SERVER_ADMIN_PASSWORD \
--output tsv

Configure a firewall rule for your Azure SQL


Database server
Azure SQL Database instances are secured by default. They have a firewall that doesn't
allow any incoming connection. To be able to use your database, you need to add a
firewall rule that will allow the local IP address to access the database server.

Because you configured your local IP address at the beginning of this article, you can
open the server's firewall by running the following command:

Azure CLI

az sql server firewall-rule create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME-database-allow-local-ip \
    --server $AZ_DATABASE_NAME \
    --start-ip-address $AZ_LOCAL_IP_ADDRESS \
    --end-ip-address $AZ_LOCAL_IP_ADDRESS \
    --output tsv

If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.

Obtain the IP address of your host machine by running the following command in WSL:

Bash

cat /etc/resolv.conf

Copy the IP address following the term nameserver , then use the following command to
set an environment variable for the WSL IP Address:

Bash

export AZ_WSL_IP_ADDRESS=<the-copied-IP-address>
Then, use the following command to open the server's firewall to your WSL-based app:

Azure CLI

az sql server firewall-rule create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME-database-allow-local-ip-wsl \
    --server $AZ_DATABASE_NAME \
    --start-ip-address $AZ_WSL_IP_ADDRESS \
    --end-ip-address $AZ_WSL_IP_ADDRESS \
    --output tsv

Configure an Azure SQL database


The Azure SQL Database server that you created earlier is empty. It doesn't have any
database that you can use with the Spring Boot application. Create a new database
called demo by running the following command:

Azure CLI

az sql db create \
--resource-group $AZ_RESOURCE_GROUP \
--name demo \
--server $AZ_DATABASE_NAME \
--output tsv

Create an SQL database non-admin user and


grant permission
This step will create a non-admin user and grant all permissions on the demo database
to it.

Create a SQL script called create_user.sql for creating a non-admin user. Add the
following contents and save it locally:

Bash

cat << EOF > create_user.sql
USE demo;
GO
CREATE USER $AZ_SQL_SERVER_NON_ADMIN_USERNAME WITH PASSWORD='$AZ_SQL_SERVER_NON_ADMIN_PASSWORD'
GO
GRANT CONTROL ON DATABASE::demo TO $AZ_SQL_SERVER_NON_ADMIN_USERNAME;
GO
EOF

Then, use the following command to run the SQL script to create the non-admin user:

Bash

sqlcmd -S $AZ_DATABASE_NAME.database.windows.net,1433 -d demo -U $AZ_SQL_SERVER_ADMIN_USERNAME -P $AZ_SQL_SERVER_ADMIN_PASSWORD -i create_user.sql

Note

For more information about creating SQL database users, see CREATE USER
(Transact-SQL).

Create a reactive Spring Boot application


To create a reactive Spring Boot application, we'll use Spring Initializr . The application
that we'll create uses:

Spring Boot 2.7.11.
The following dependencies: Spring Reactive Web (also known as Spring WebFlux) and Spring Data R2DBC.

Generate the application by using Spring


Initializr
Generate the application on the command line by running the following command:

Bash

curl https://start.spring.io/starter.tgz -d dependencies=webflux,data-r2dbc -d baseDir=azure-database-workshop -d bootVersion=2.7.11 -d javaVersion=17 | tar -xzvf -

Add the reactive Azure SQL Database driver


implementation
Open the generated project's pom.xml file to add the reactive Azure SQL Database
driver from the r2dbc-mssql GitHub repository .

After the spring-boot-starter-webflux dependency, add the following text:

XML

<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-mssql</artifactId>
<scope>runtime</scope>
</dependency>

Configure Spring Boot to use Azure SQL Database


Open the src/main/resources/application.properties file, and add the following text:

properties

logging.level.org.springframework.data.r2dbc=DEBUG

spring.r2dbc.url=r2dbc:pool:mssql://$AZ_DATABASE_NAME.database.windows.net:1433/demo
spring.r2dbc.username=nonspring@$AZ_DATABASE_NAME
spring.r2dbc.password=$AZ_SQL_SERVER_NON_ADMIN_PASSWORD

Replace the two $AZ_DATABASE_NAME variables and the
$AZ_SQL_SERVER_NON_ADMIN_PASSWORD variable with the values that you configured at the
beginning of this article.

Note

For better performance, the spring.r2dbc.url property is configured to use a
connection pool using r2dbc-pool .

You should now be able to start your application by using the provided Maven wrapper
as follows:

Bash

./mvnw spring-boot:run



Create the database schema


Inside the main DemoApplication class, configure a new Spring bean that will create a
database schema, using the following code:

Java

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.ClassPathResource;
import
org.springframework.data.r2dbc.connectionfactory.init.ConnectionFactoryIniti
alizer;
import
org.springframework.data.r2dbc.connectionfactory.init.ResourceDatabasePopula
tor;

import io.r2dbc.spi.ConnectionFactory;

@SpringBootApplication
public class DemoApplication {

public static void main(String[] args) {


SpringApplication.run(DemoApplication.class, args);
}

@Bean
public ConnectionFactoryInitializer initializer(ConnectionFactory
connectionFactory) {
ConnectionFactoryInitializer initializer = new
ConnectionFactoryInitializer();
initializer.setConnectionFactory(connectionFactory);
ResourceDatabasePopulator populator = new
ResourceDatabasePopulator(new ClassPathResource("schema.sql"));
initializer.setDatabasePopulator(populator);
return initializer;
}
}
This Spring bean uses a file called schema.sql, so create that file in the
src/main/resources folder, and add the following text:

SQL

DROP TABLE IF EXISTS todo;

CREATE TABLE todo (id INT IDENTITY PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BIT);

Stop the running application, and start it again using the following command. The
application will now use the demo database that you created earlier, and create a todo
table inside it.

Bash

./mvnw spring-boot:run


Code the application


Next, add the Java code that will use R2DBC to store and retrieve data from your Azure
SQL Database server.

Create a new Todo Java class, next to the DemoApplication class, using the following
code:

Java

package com.example.demo;

import org.springframework.data.annotation.Id;

public class Todo {

public Todo() {
}

public Todo(String description, String details, boolean done) {


this.description = description;
this.details = details;
this.done = done;
}

@Id
private Long id;

private String description;

private String details;

private boolean done;

public Long getId() {


return id;
}

public void setId(Long id) {


this.id = id;
}

public String getDescription() {


return description;
}

public void setDescription(String description) {


this.description = description;
}

public String getDetails() {


return details;
}

public void setDetails(String details) {


this.details = details;
}

public boolean isDone() {


return done;
}

public void setDone(boolean done) {


this.done = done;
}
}

This class is a domain model mapped onto the todo table that you created before.

To manage that class, you need a repository. Define a new TodoRepository interface in
the same package, using the following code:

Java
package com.example.demo;

import org.springframework.data.repository.reactive.ReactiveCrudRepository;

public interface TodoRepository extends ReactiveCrudRepository<Todo, Long> {


}

This repository is a reactive repository that Spring Data R2DBC manages.

Finish the application by creating a controller that can store and retrieve data.
Implement a TodoController class in the same package, and add the following code:

Java

package com.example.demo;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/")
public class TodoController {

private final TodoRepository todoRepository;

public TodoController(TodoRepository todoRepository) {


this.todoRepository = todoRepository;
}

@PostMapping("/")
@ResponseStatus(HttpStatus.CREATED)
public Mono<Todo> createTodo(@RequestBody Todo todo) {
return todoRepository.save(todo);
}

@GetMapping("/")
public Flux<Todo> getTodos() {
return todoRepository.findAll();
}
}

Finally, halt the application and start it again using the following command:

Bash

./mvnw spring-boot:run
Test the application
To test the application, you can use cURL.

First, create a new "todo" item in the database using the following command:

Bash

curl --header "Content-Type: application/json" \
    --request POST \
    --data '{"description":"configuration","details":"congratulations, you have set up R2DBC correctly!","done": "true"}' \
    http://127.0.0.1:8080

This command should return the created item, as shown here:

JSON

{"id":1,"description":"configuration","details":"congratulations, you have


set up R2DBC correctly!","done":true}

Next, retrieve the data by using a new cURL request with the following command:

Bash

curl http://127.0.0.1:8080

This command will return the list of "todo" items, including the item you've created, as
shown here:

JSON

[{"id":1,"description":"configuration","details":"congratulations, you have


set up R2DBC correctly!","done":true}]


Congratulations! You've created a fully reactive Spring Boot application that uses R2DBC
to store and retrieve data from Azure SQL Database.
Clean up resources
To clean up all resources used during this quickstart, delete the resource group by using
the following command:

Azure CLI

az group delete \
--name $AZ_RESOURCE_GROUP \
--yes

Next steps
To learn more about deploying a Spring Data application to Azure Spring Apps and
using managed identity, see Tutorial: Deploy a Spring application to Azure Spring Apps
with a passwordless connection to an Azure database.

To learn more about Spring and Azure, continue to the Spring on Azure documentation
center.

Spring on Azure

See also
For more information about Spring Data R2DBC, see Spring's reference
documentation .

For more information about using Azure with Java, see Azure for Java developers and
Working with Azure DevOps and Java.
Create and use append-only ledger
tables
Article • 05/23/2023

Applies to:
SQL Server 2022 (16.x)
Azure SQL Database
Azure SQL Managed
Instance

This article shows you how to create an append-only ledger table. Next, you'll insert
values in your append-only ledger table and then attempt to make updates to the data.
Finally, you'll view the results by using the ledger view. We'll use an example of a card
key access system for a facility, which is an append-only system pattern. Our example
will give you a practical look at the relationship between the append-only ledger table
and its corresponding ledger view.

For more information, see Append-only ledger tables.

Prerequisites
SQL Server Management Studio or Azure Data Studio.

Create an append-only ledger table


We'll create a KeyCardEvents table with the following schema.

| Column name | Data type | Description |
| --- | --- | --- |
| EmployeeID | int | The unique ID of the employee accessing the building |
| AccessOperationDescription | nvarchar(MAX) | The access operation of the employee |
| Timestamp | datetime2 | The date and time the employee accessed the building |

1. Use SQL Server Management Studio or Azure Data Studio to create a new schema
and table called [AccessControl].[KeyCardEvents] .

SQL

CREATE SCHEMA [AccessControl];
GO
CREATE TABLE [AccessControl].[KeyCardEvents]
(
    [EmployeeID] INT NOT NULL,
    [AccessOperationDescription] NVARCHAR (1024) NOT NULL,
    [Timestamp] DATETIME2 NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));

2. Add a new building access event in the [AccessControl].[KeyCardEvents] table


with the following values.

SQL

INSERT INTO [AccessControl].[KeyCardEvents]
VALUES ('43869', 'Building42', '2020-05-02T19:58:47.1234567');

3. View the contents of your KeyCardEvents table, specifying the GENERATED
ALWAYS columns that were added to your append-only ledger table.

SQL

SELECT *
    ,[ledger_start_transaction_id]
    ,[ledger_start_sequence_number]
FROM [AccessControl].[KeyCardEvents];

4. View the contents of your KeyCardEvents ledger view along with the ledger
transactions system view to identify who added records into the table.

SQL

SELECT
    t.[commit_time] AS [CommitTime]
    , t.[principal_name] AS [UserName]
    , l.[EmployeeID]
    , l.[AccessOperationDescription]
    , l.[Timestamp]
    , l.[ledger_operation_type_desc] AS Operation
FROM [AccessControl].[KeyCardEvents_Ledger] l
JOIN sys.database_ledger_transactions t
    ON t.transaction_id = l.ledger_transaction_id
ORDER BY t.commit_time DESC;

5. Try to update the KeyCardEvents table by changing the EmployeeID from 43869 to
34184.

SQL

UPDATE [AccessControl].[KeyCardEvents] SET [EmployeeID] = 34184;

You'll receive an error message that states the updates aren't allowed for your
append-only ledger table.

Permissions
Creating append-only ledger tables requires the ENABLE LEDGER permission. For more
information on permissions related to ledger tables, see Permissions.

Next steps
Append-only ledger tables
How to migrate data from regular tables to ledger tables
Create and use updatable ledger tables
Article • 05/24/2023

Applies to:
SQL Server 2022 (16.x)
Azure SQL Database
Azure SQL Managed
Instance

This article shows you how to create an updatable ledger table. Next, you'll insert values
in your updatable ledger table and then make updates to the data. Finally, you'll view
the results by using the ledger view. We'll use an example of a banking application that
tracks banking customers' balances in their accounts. Our example will give you a
practical look at the relationship between the updatable ledger table and its
corresponding history table and ledger view.

Prerequisites
SQL Server Management Studio or Azure Data Studio.

Create an updatable ledger table


We'll create an account balance table with the following schema.

| Column name | Data type | Description |
| --- | --- | --- |
| CustomerID | int | Customer ID - Primary key clustered |
| LastName | varchar(50) | Customer last name |
| FirstName | varchar(50) | Customer first name |
| Balance | decimal(10,2) | Account balance |

1. Use SQL Server Management Studio or Azure Data Studio to create a new schema
and table called [Account].[Balance] .

SQL

CREATE SCHEMA [Account];
GO
CREATE TABLE [Account].[Balance]
(
    [CustomerID] INT NOT NULL PRIMARY KEY CLUSTERED,
    [LastName] VARCHAR (50) NOT NULL,
    [FirstName] VARCHAR (50) NOT NULL,
    [Balance] DECIMAL (10,2) NOT NULL
)
WITH
(
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = [Account].[BalanceHistory]),
    LEDGER = ON
);

Note

Specifying the LEDGER = ON argument is optional if you enabled a ledger
database when you created your database.

2. When your updatable ledger table is created, the corresponding history table and
ledger view are also created. Run the following T-SQL commands to see the new
table and the new view.

SQL

SELECT
    ts.[name] + '.' + t.[name] AS [ledger_table_name]
    , hs.[name] + '.' + h.[name] AS [history_table_name]
    , vs.[name] + '.' + v.[name] AS [ledger_view_name]
FROM sys.tables AS t
JOIN sys.tables AS h ON (h.[object_id] = t.[history_table_id])
JOIN sys.views v ON (v.[object_id] = t.[ledger_view_id])
JOIN sys.schemas ts ON (ts.[schema_id] = t.[schema_id])
JOIN sys.schemas hs ON (hs.[schema_id] = h.[schema_id])
JOIN sys.schemas vs ON (vs.[schema_id] = v.[schema_id])
WHERE t.[name] = 'Balance';

3. Insert the name Nick Jones as a new customer with an opening balance of $50.

SQL

INSERT INTO [Account].[Balance]
VALUES (1, 'Jones', 'Nick', 50);

4. Insert the names John Smith , Joe Smith , and Mary Michaels as new customers
with opening balances of $500, $30, and $200, respectively.

SQL
INSERT INTO [Account].[Balance]
VALUES (2, 'Smith', 'John', 500),
    (3, 'Smith', 'Joe', 30),
    (4, 'Michaels', 'Mary', 200);

5. View the [Account].[Balance] updatable ledger table, specifying the GENERATED
ALWAYS columns added to the table.

SQL

SELECT [CustomerID]
    ,[LastName]
    ,[FirstName]
    ,[Balance]
    ,[ledger_start_transaction_id]
    ,[ledger_end_transaction_id]
    ,[ledger_start_sequence_number]
    ,[ledger_end_sequence_number]
FROM [Account].[Balance];

In the results window, you'll first see the values inserted by your T-SQL commands,
along with the system metadata that's used for data lineage purposes.

The ledger_start_transaction_id column notes the unique transaction ID
associated with the transaction that inserted the data. Because John , Joe ,
and Mary were inserted by using the same transaction, they share the same
transaction ID.

The ledger_start_sequence_number column notes the order by which values
were inserted by the transaction.

6. Update Nick 's balance from 50 to 100 .

SQL

UPDATE [Account].[Balance] SET [Balance] = 100
WHERE [CustomerID] = 1;

7. View the [Account].[Balance] ledger view, along with the transaction ledger
system view to identify users that made the changes.
SQL

SELECT
    t.[commit_time] AS [CommitTime]
    , t.[principal_name] AS [UserName]
    , l.[CustomerID]
    , l.[LastName]
    , l.[FirstName]
    , l.[Balance]
    , l.[ledger_operation_type_desc] AS Operation
FROM [Account].[Balance_Ledger] l
JOIN sys.database_ledger_transactions t
    ON t.transaction_id = l.ledger_transaction_id
ORDER BY t.commit_time DESC;

 Tip

We recommend that you query the history of changes through the ledger
view and not the history table.

Nick 's account balance was successfully updated in the updatable ledger table to
100 .

The ledger view shows that updating the ledger table is a DELETE of the original
row with 50 . The balance with a corresponding INSERT of a new row with 100
shows the new balance for Nick .

Permissions
Creating updatable ledger tables requires the ENABLE LEDGER permission. For more
information on permissions related to ledger tables, see Permissions.
Next steps
Database ledger
Updatable ledger tables
Append-only ledger tables
How to migrate data from regular tables to ledger tables
Migrate data from regular tables to
ledger tables
Article • 05/23/2023

Applies to:
SQL Server 2022 (16.x)
Azure SQL Database
Azure SQL Managed
Instance

Converting regular tables to ledger tables isn't possible, but you can migrate the data
from an existing regular table to a ledger table, and then replace the original table with
the ledger table.

When you're performing a database ledger verification, the process needs to order all
operations within each transaction. If you use a SELECT INTO or BULK INSERT statement
to copy a few billion rows from a regular table to a ledger table, it will all be done in one
single transaction. This means lots of data needs to be fully sorted, which will be done in
a single thread. The sorting operation takes a long time to complete.

To convert a regular table into a ledger table, Microsoft recommends using the
sys.sp_copy_data_in_batches stored procedure. This splits the copy operation in batches
of 10-100 K rows per transaction. As a result, the database ledger verification has
smaller transactions that can be sorted in parallel. This helps the time of the database
ledger verification tremendously.

7 Note

The customer can still use other commands, services, or tools to copy the data from
the source table to the target table. Make sure you avoid large transactions
because this will have a performance impact on the database ledger verification.

This article shows you how can convert a regular table into a ledger table.

Prerequisites
SQL Server Management Studio or Azure Data Studio.

Create an append-only or updatable ledger


table
Before you can use the sys.sp_copy_data_in_batches stored procedure, you need to
create an append-only ledger table or updatable ledger table with the same schema as
the source table. The schema should be identical in terms of number of columns,
column names, and their data types. TRANSACTION ID , SEQUENCE NUMBER , and GENERATED
ALWAYS columns are ignored since they're system generated. Indexes between the
tables can be different but the target table can only be a Heap table or have a clustered
index. Non-clustered indexes should be created afterwards.

Assume we have the following regular Employees table in the database.

SQL

CREATE TABLE [dbo].[Employees](

[EmployeeID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,

[SSN] [char](11) NOT NULL,

[FirstName] [nvarchar](50) NOT NULL,

[LastName] [nvarchar](50) NOT NULL,

[Salary] [money] NOT NULL

);

The easiest way to create an append-only ledger table or updatable ledger table is
scripting the original table and add the LEDGER = ON clause. In the script below, we're
creating a new updatable ledger table, called Employees_LedgerTable based on the
schema of the Employees table.

SQL

CREATE TABLE [dbo].[Employees_LedgerTable](

[EmployeeID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,

[SSN] [char](11) NOT NULL,

[FirstName] [nvarchar](50) NOT NULL,

[LastName] [nvarchar](50) NOT NULL,

[Salary] [money] NOT NULL

WITH

SYSTEM_VERSIONING = ON,

LEDGER = ON

);

Copy data from a regular table to a ledger


table
The stored procedure sys.sp_copy_data_in_batches copies data from the source table to
the target table after verifying that their schema is identical. The data is copied in
batches in individual transactions. If the operation fails, the target table is partially
populated. The target table should also be empty.

In the script below, we're copying the data from the regular Employees table to the new
updatable ledger table, Employees_LedgerTable .

SQL

sp_copy_data_in_batches @source_table_name = N'Employees' ,


@target_table_name = N'Employees_LedgerTable'

Next steps
Append-only ledger tables
Updatable ledger tables
Configure a ledger database
Article • 07/14/2023

Applies to: SQL Server 2022 (16.x) Azure SQL Database Azure SQL
Managed Instance

This article provides information on configuring a ledger database using the Azure
portal, T-SQL, PowerShell, or the Azure CLI for Azure SQL Database. For information on
creating a ledger database in SQL Server 2022 (16.x) or Azure SQL Managed Instance,
use the switch at the top of this page.

Prerequisites
Have an active Azure subscription. If you don't have one, create a free account .
A logical server.

Enable ledger database

7 Note

Enabling the ledger functionality at the database level will make all tables in this
database updatable ledger tables. This option cannot be changed after the
database is created. Creating a table with the option LEDGER = OFF will throw an
error message.

Portal

1. Open the Azure portal and create an Azure SQL Database .

2. On the Security tab, select Configure ledger.


3. On the Configure ledger pane, select Enable for all future tables in this
database.

4. Select Apply to save this setting.

Next steps
Ledger overview
Append-only ledger tables
Updatable ledger tables
Enable automatic digest storage
Verify a ledger table to detect
tampering
Article • 03/03/2023

Applies to:
SQL Server 2022 (16.x)
Azure SQL Database
Azure SQL Managed
Instance

In this article, you'll verify the integrity of the data in your ledger tables. If you've
configured the Automatic digest storage on your database, follow the T-SQL using
automatic digest storage section. Otherwise, follow the T-SQL using a manual generated
digest section.

Prerequisites
Have an active Azure subscription if you're using Azure SQL Database or Azure SQL
Managed Instance. If you don't have one, create a free account .
Create and use updatable ledger tables or create and use append-only ledger
tables.
SQL Server Management Studio or Azure Data Studio.
The database option ALLOW_SNAPSHOT_ISOLATION has to be enabled on the
database before you can run the verifcation stored procedures.

Run ledger verification for the database


T-SQL using automatic digest storage

1. Connect to your database by using SQL Server Management Studio or Azure


Data Studio.

2. Create a new query with the following T-SQL statement:

SQL

DECLARE @digest_locations NVARCHAR(MAX) = (SELECT * FROM


sys.database_ledger_digest_locations FOR JSON AUTO,
INCLUDE_NULL_VALUES);SELECT @digest_locations as digest_locations;

BEGIN TRY

EXEC sys.sp_verify_database_ledger_from_digest_storage
@digest_locations;

SELECT 'Ledger verification succeeded.' AS Result;

END TRY

BEGIN CATCH

THROW;

END CATCH

7 Note

The verification script can also be found in the Azure portal. Open the
Azure portal and locate the database you want to verify. In Security,
select the Ledger option. In the Ledger pane, select </> Verify database.

3. Execute the query. You'll see that digest_locations returns the current location
of where your database digests are stored and any previous locations. Result
returns the success or failure of ledger verification.

4. Open the digest_locations result set to view the locations of your digests. The
following example shows two digest storage locations for this database:

path indicates the location of the digests.

last_digest_block_id indicates the block ID of the last digest stored in the


path location.
is_current indicates whether the location in path is the current (true) or
previous (false) one.

JSON

"path":
"https:\/\/digest1.blob.core.windows.net\/sqldbledgerdigests\/
janderstestportal2server\/jandersnewdb\/2021-05-
20T04:39:47.6570000",

"last_digest_block_id": 10016,

"is_current": true

},

"path": "https:\/\/jandersneweracl.confidential-
ledger.azure.com\/sqldbledgerdigests\/janderstestportal2server
\/jandersnewdb\/2021-05-20T04:39:47.6570000",

"last_digest_block_id": 1704,

"is_current": false

) Important

When you run ledger verification, inspect the location of digest_locations


to ensure digests used in verification are retrieved from the locations you
expect. You want to make sure that a privileged user hasn't changed
locations of the digest storage to an unprotected storage location, such
as Azure Storage, without a configured and locked immutability policy.

5. Verification returns the following message in the Results window.

If there was no tampering in your database, the message is:

Output

Ledger verification successful

If there was tampering in your database, the following error appears in


the Messages window:

Output

Failed to execute query. Error: The hash of block xxxx in the


database ledger doesn't match the hash provided in the digest
for this block.

Next steps
Ledger overview
sys.database_ledger_digest_locations
sp_verify_database_ledger_from_digest_storage
sp_verify_database_ledger
sp_generate_database_ledger_digest
Enable vulnerability assessment on your
Azure SQL databases
Article • 05/18/2023

In this article, you'll learn how to enable vulnerability assessment so you can find and
remediate database vulnerabilities. We recommend that you enable vulnerability
assessment using the express configuration so you aren't dependent on a storage
account. You can also enable vulnerability assessment using the classic configuration.

When you enable the Defender for Azure SQL plan in Defender for Cloud, Defender for
Cloud automatically enables Advanced Threat Protection and vulnerability assessment
with the express configuration for all Azure SQL databases in the selected subscription.

If you have Azure SQL databases with vulnerability assessment disabled, you can
enable vulnerability assessment in the express or classic configuration.
If you have Azure SQL databases with vulnerability assessment enabled in the
classic configuration, you can enable the express configuration so that assessments
don't require a storage account.

Prerequisites
Make sure that Microsoft Defender for Azure SQL is enabled so that you can run
scans on your Azure SQL databases.
Make sure you read and understand the differences between the express and
classic configuration.

Enable vulnerability assessment


When you enable the Defender for Azure SQL plan in Defender for Cloud, Defender for
Cloud automatically enables Advanced Threat Protection and vulnerability assessment
with the express configuration for all Azure SQL databases in the selected subscription.

You can enable vulnerability assessment in two ways:

Express configuration
Classic configuration

Express configuration
To enable vulnerability assessment without a storage account, using the express
configuration:

1. Sign in to the Azure portal .

2. Open the specific Azure SQL Database resource.

3. Under the Security heading, select Defender for Cloud.

4. Enable the express configuration of vulnerability assessment:

) Important

Baselines and scan history are not migrated.

If vulnerability assessment is not configured, select Enable in the notice that


prompts you to enable the vulnerability assessment express configuration,
and confirm the change.

You can also select Configure and then select Enable in the Microsoft


Defender for SQL settings:

Select Enable to use the vulnerability assessment express configuration.

If vulnerability assessment is already configured, select Enable in the notice


that prompts you to switch to express configuration, and confirm the change.
You can also select Configure and then select Enable in the Microsoft
Defender for SQL settings:

Now you can go to the SQL databases should have vulnerability findings resolved
recommendation to see the vulnerabilities found in your databases. You can also run
on-demand vulnerability assessment scans to see the current findings.

7 Note

Each database is randomly assigned a scan time on a set day of the week.

Enable express vulnerability assessment at scale

If you have SQL resources that don't have Advanced Threat Protection and vulnerability
assessment enabled, you can use the SQL vulnerability assessment APIs to enable SQL
vulnerability assessment with the express configuration at scale.

Classic configuration
To enable vulnerability assessment with a storage account, use the classic configuration:

1. In the Azure portal , open the specific resource in Azure SQL Database, SQL
Managed Instance Database, or Azure Synapse.

2. Under the Security heading, select Defender for Cloud.


3. Select Configure on the link to open the Microsoft Defender for SQL settings pane
for either the entire server or managed instance.

4. In the Server settings page, enter the Microsoft Defender for SQL settings:
a. Configure a storage account where your scan results for all databases on the
server or managed instance will be stored. For information about storage
accounts, see About Azure storage accounts.

b. To configure vulnerability assessments to automatically run weekly scans to


detect security misconfigurations, set Periodic recurring scans to On. The
results are sent to the email addresses you provide in Send scan reports to. You
can also send email notification to admins and subscription owners by enabling
Also send email notification to admins and subscription owners.

7 Note

Each database is randomly assigned a scan time on a set day of the week.
Email notifications are scheduled randomly per server on a set day of the
week. The email notification report includes data from all recurring
database scans that were executed during the preceding week (does not
include on-demand scans).

Next steps
Learn more about:

Microsoft Defender for Azure SQL


Data discovery and classification
Storing scan results in a storage account behind firewalls and VNets
Manage vulnerability findings in your
Azure SQL databases
Article • 06/19/2023

Microsoft Defender for Cloud provides vulnerability assessment for your Azure SQL
databases. Vulnerability assessment scans your databases for software vulnerabilities
and provides a list of findings. You can use the findings to remediate software
vulnerabilities and disable findings.

Prerequisites
Make sure that you know whether you're using the express or classic configurations
before you continue.

To see which configuration you're using:

1. In the Azure portal , open the specific resource in Azure SQL Database, SQL
Managed Instance Database, or Azure Synapse.
2. Under the Security heading, select Defender for Cloud.
3. In the Enablement Status, select Configure to open the Microsoft Defender for
SQL settings pane for either the entire server or managed instance.

If the vulnerability settings show the option to configure a storage account, you're using
the classic configuration. If not, you're using the express configuration.

Express configuration
Classic configuration

Express configuration

View scan history


Select Scan History in the vulnerability assessment pane to view a history of all scans
previously run on this database.

Express configuration doesn't store scan results if they're identical to previous scans. The
scan time shown in the scan history is the time of the last scan where the scan results
changed.
Disable specific findings from Microsoft Defender for
Cloud (preview)
If you have an organizational need to ignore a finding rather than remediate it, you can
disable the finding. Disabled findings don't impact your secure score or generate
unwanted noise. You can see the disabled finding in the "Not applicable" section of the
scan results.

When a finding matches the criteria you've defined in your disable rules, it won't appear
in the list of findings. Typical scenarios may include:

Disable findings with medium or lower severity


Disable findings that are non-patchable
Disable findings from benchmarks that aren't of interest for a defined scope

) Important

To disable specific findings, you need permissions to edit a policy in Azure Policy.
Learn more in Azure RBAC permissions in Azure Policy.

To create a rule:

1. From the recommendations detail page for Vulnerability assessment findings on


your SQL servers on machines should be remediated, select Disable rule.

2. Select the relevant scope.

3. Define your criteria. You can use any of the following criteria:

Finding ID
Severity
Benchmarks

4. Create a disable rule for VA findings on SQL servers on machines

5. Select Apply rule. Changes might take up to 24 hrs to take effect.

6. To view, override, or delete a rule:


a. Select Disable rule.
b. From the scope list, subscriptions with active rules show as Rule applied.
c. To view or delete the rule, select the ellipsis menu ("...").

Configure email notifications using Azure Logic Apps


To receive regular updates of the vulnerability assessment status for your database, you
can use the customizable Azure Logic Apps template .

Using the template will allow you to:

Choose the timing of the email reports.


Have a consistent view of your vulnerability assessment status that includes
disabled rules.
Send reports for Azure SQL Servers and SQL VMs.
Customize report structure and look-and-feel to match your organizational
standards.

Manage vulnerability assessments programmatically


The express configuration is supported in the latest REST API version with the following
functionality:

Description Scope API

Baseline bulk operations System Sql Vulnerability Assessment


Database Baselines

Sql Vulnerability Assessment Baseline

Baseline bulk operations User Database Sql Vulnerability


Database Assessment Baselines

Single rule baseline operations User Database Sql Vulnerability


Database Assessment Rule Baselines

Single rule baseline operations System Sql Vulnerability Assessment Rule


Database Baselines

Sql Vulnerability Assessment Rule


Baseline

Single scan results User Database Sql Vulnerability


Database Assessment Scan Result

Single scan results System Sql Vulnerability Assessment Scan


Database Result

Scan details (summary) User Database Sql Vulnerability


Database Assessment Scans

Scan details (summary) System Sql Vulnerability Assessment Scans


Database

Execute manual scan User Database Sql Vulnerability


Database Assessment Execute Scan
Description Scope API

Execute manual scan System Sql Vulnerability Assessment Execute


Database Scan

VA settings (GET only is supported for User Database Sql Vulnerability


Express Configuration) Database Assessments Settings

VA Settings operations Server Sql Vulnerability Assessments


Settings

Sql Vulnerability Assessments

Using Azure Resource Manager templates

Use the following ARM template to create a new Azure SQL Logical Server with
express configuration for SQL vulnerability assessment.

To configure vulnerability assessment baselines by using Azure Resource Manager


templates, use the
Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines type. Make

sure that vulnerabilityAssessments is enabled before you add baselines.

Here are several examples to how you can set up baselines using ARM templates:

Setup batch baseline based on latest scan results:

JSON

"type":
"Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines"
,

"apiVersion": "2022-02-01-preview",

"name": "[concat(parameters('serverName'),'/',
parameters('databaseName') , '/default/default')]",

"properties": {

"latestScan": true

Setup batch baseline based on specific results:

JSON

"type":
"Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines"
,

"apiVersion": "2022-02-01-preview",

"name": "[concat(parameters('serverName'),'/',
parameters('databaseName') , '/default/default')]",

"properties": {

"latestScan": false,

"results": {

"VA2065": [

"FirewallRuleName3",

"62.92.15.67",

"62.92.15.67"

],

"FirewallRuleName4",

"62.92.15.68",

"62.92.15.68"

],

"VA2130": [

"dbo"

Set up baseline for a specific rule:

JSON

"type":
"Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines/
rules",

"apiVersion": "2022-02-01-preview",

"name": "[concat(parameters('serverName'),'/',
parameters('databaseName') , '/default/default/VA1143')]",

"properties": {

"latestScan": false,

"results": [

[ "True" ]

Set up batch baselines on the master database based on latest scan results:

JSON

"type":
"Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines"
,

"apiVersion": "2022-02-01-preview",

"name": "
[concat(parameters('serverName'),'/master/default/default')]",

"properties": {

"latestScan": true

Using PowerShell
Express configuration isn't supported in PowerShell cmdlets but you can use PowerShell
to invoke the latest vulnerability assessment capabilities using REST API, for example:

Enable express configuration on an Azure SQL Server


Setup baselines based on latest scan results for all databases in an Azure SQL
Server
Express configuration PowerShell commands reference

Using Azure CLI


Invoke express configuration using Azure CLI.

Troubleshooting

Revert back to the classic configuration


To change an Azure SQL database from the express vulnerability assessment
configuration to the classic configuration:

1. Disable the Defender for Azure SQL plan from the Azure portal.

2. Use PowerShell to reconfigure using the classic experience:

PowerShell

Update-AzSqlServerAdvancedThreatProtectionSetting `

-ResourceGroupName "demo-rg" `

-ServerName "dbsrv1" `

-Enable 1

Update-AzSqlServerVulnerabilityAssessmentSetting `

-ResourceGroupName "demo-rg" `

-ServerName "dbsrv1" `

-StorageAccountName "mystorage" `

-RecurringScansInterval Weekly `

-ScanResultsContainerName "vulnerability-assessment"

You may have to tweak Update-AzSqlServerVulnerabilityAssessmentSetting


according to Store Vulnerability Assessment scan results in a storage account
accessible behind firewalls and VNets.

Errors
“Vulnerability Assessment is enabled on this server or one of its underlying databases
with an incompatible version”

Possible causes:

Switching to express configuration failed due to a server policy error.

Solution: Try again to enable the express configuration. If the issue persists, try to
disable the Microsoft Defender for SQL in the Azure SQL resource, select Save,
enable Microsoft Defender for SQL again, and select Save.

Switching to express configuration failed due to a database policy error. Database


policies aren't visible in the Azure portal for Defender for SQL vulnerability
assessment, so we check for them during the validation stage of switching to
express configuration.

Solution: Disable all database policies for the relevant server and then try to switch
to express configuration again.
Cosnider using the provided PowerShell script for
assistance.

Classic configuration

View scan history


Select Scan History in the vulnerability assessment pane to view a history of all scans
previously run on this database.

Disable specific findings from Microsoft Defender for


Cloud (preview)
If you have an organizational need to ignore a finding, rather than remediate it, you can
optionally disable it. Disabled findings don't impact your secure score or generate
unwanted noise.

When a finding matches the criteria you've defined in your disable rules, it won't appear
in the list of findings.
Typical scenarios may include:

Disable findings with medium or lower severity


Disable findings that are non-patchable
Disable findings from benchmarks that aren't of interest for a defined scope

) Important

To disable specific findings, you need permissions to edit a policy in Azure


Policy. Learn more in Azure RBAC permissions in Azure Policy.
Disabled findings will still be included in the weekly SQL vulnerability
assessment email report.
Disabled rules are shown in the "Not applicable" section of the scan results.

To create a rule:

1. From the recommendations detail page for Vulnerability assessment findings on


your SQL servers on machines should be remediated, select Disable rule.

2. Select the relevant scope.

3. Define your criteria. You can use any of the following criteria:

Finding ID
Severity
Benchmarks
4. Select Apply rule. Changes might take up to 24 hrs to take effect.

5. To view, override, or delete a rule:

a. Select Disable rule.

b. From the scope list, subscriptions with active rules show as Rule applied.

c. To view or delete the rule, select the ellipsis menu ("...").

Manage vulnerability assessments programmatically


Azure PowerShell

7 Note

This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.

) Important

The PowerShell Azure Resource Manager module is still supported, but all future
development is for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The
arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.

You can use Azure PowerShell cmdlets to programmatically manage your vulnerability
assessments. The supported cmdlets are:

Cmdlet name as a link Description

Clear-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline Clears the vulnerability assessment


rule baseline.
First, set the baseline before you
use this cmdlet to clear it.

Clear-AzSqlDatabaseVulnerabilityAssessmentSetting Clears the vulnerability assessment


settings of a database.

Clear- Clears the vulnerability assessment


AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline rule baseline of a managed
database.

First, set the baseline before you


use this cmdlet to clear it.

Clear- Clears the vulnerability assessment


AzSqlInstanceDatabaseVulnerabilityAssessmentSetting settings of a managed database.

Clear-AzSqlInstanceVulnerabilityAssessmentSetting Clears the vulnerability assessment


settings of a managed instance.

Convert-AzSqlDatabaseVulnerabilityAssessmentScan Converts vulnerability assessment


scan results of a database to an
Excel file (export).
Cmdlet name as a link Description

Convert- Converts vulnerability assessment


AzSqlInstanceDatabaseVulnerabilityAssessmentScan scan results of a managed
database to an Excel file (export).

Get-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline Gets the vulnerability assessment


rule baseline of a database for a
given rule.

Get- Gets the vulnerability assessment


AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline rule baseline of a managed
database for a given rule.

Get-AzSqlDatabaseVulnerabilityAssessmentScanRecord Gets all vulnerability assessment


scan records associated with a
given database.

Get- Gets all vulnerability assessment


AzSqlInstanceDatabaseVulnerabilityAssessmentScanRecord scan records associated with a
given managed database.

Get-AzSqlDatabaseVulnerabilityAssessmentSetting Returns the vulnerability


assessment settings of a database.

Get-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting Returns the vulnerability


assessment settings of a managed
database.

Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline Sets the vulnerability assessment


rule baseline.

Set- Sets the vulnerability assessment


AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline rule baseline for a managed
database.

Start-AzSqlDatabaseVulnerabilityAssessmentScan Triggers the start of a vulnerability


assessment scan on a database.

Start-AzSqlInstanceDatabaseVulnerabilityAssessmentScan Triggers the start of a vulnerability


assessment scan on a managed
database.

Update-AzSqlDatabaseVulnerabilityAssessmentSetting Updates the vulnerability


assessment settings of a database.

Update- Updates the vulnerability


AzSqlInstanceDatabaseVulnerabilityAssessmentSetting assessment settings of a managed
database.
Cmdlet name as a link Description

Update-AzSqlInstanceVulnerabilityAssessmentSetting Updates the vulnerability


assessment settings of a managed
instance.

For a script example, see Azure SQL vulnerability assessment PowerShell support.

Azure CLI

) Important

The following Azure CLI commands are for SQL databases hosted on VMs or on-
premises machines. For vulnerability assessments regarding Azure SQL Databases,
refer to the Azure portal or PowerShell section.

You can use Azure CLI commands to programmatically manage your vulnerability
assessments. The supported commands are:

Command name as a Description


link

az security va sql baseline Delete SQL vulnerability assessment rule baseline.


delete

az security va sql baseline View SQL vulnerability assessment baseline for all rules.
list

az security va sql baseline Sets SQL vulnerability assessment baseline. Replaces the current
set baseline.

az security va sql baseline View SQL vulnerability assessment rule baseline.


show

az security va sql baseline Update SQL vulnerability assessment rule baseline. Replaces the
update current rule baseline.

az security va sql results View all SQL vulnerability assessment scan results.
list

az security va sql results View SQL vulnerability assessment scan results.


show

az security va sql scans list List all SQL vulnerability assessment scan summaries.

az security va sql scans View SQL vulnerability assessment scan summaries.


show
Resource Manager templates
To configure vulnerability assessment baselines by using Azure Resource Manager
templates, use the
Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines type.

Ensure that you have enabled vulnerabilityAssessments before you add baselines.

Here's an example for defining Baseline Rule VA2065 to master database and VA1143 to
user database as resources in a Resource Manager template:

JSON

"resources": [

"type": "Microsoft.Sql/servers/databases/vulnerabilityAapiVersion":
"2018-06-01",

"name": "[concat(parameters('server_name'),'/',
parameters('database_name') , '/default/VA2065/master')]",

"properties": {

"baselineResults": [

"result": [

"FirewallRuleName3",

"StartIpAddress",

"EndIpAddress"

},

"result": [

"FirewallRuleName4",

"62.92.15.68",

"62.92.15.68"

},

"type": "Microsoft.Sql/servers/databases/vulnerabilityAapiVersion":
"2018-06-01",

"name": "[concat(parameters('server_name'),'/',
parameters('database_name'), '/default/VA2130/Default')]",

"dependsOn": [
"[resourceId('Microsoft.Sql/servers/vulnerabilityAssessments',
parameters('server_name'), 'Default')]"

],

"properties": {

"baselineResults": [

"result": [

"dbo"

For master database and user database, the resource names are defined differently:

Master database - "name": "[concat(parameters('server_name'),'/',


parameters('database_name'), '/default/VA2065/master')]",
User database - "name": "[concat(parameters('server_name'),'/',
parameters('database_name'), '/default/VA2065/default')]",

To handle Boolean types as true/false, set the baseline result with binary input like
"1"/"0".

JSON

"type": "Microsoft.Sql/servers/databases/vulnerabilityapiVersion":
"2018-06-01",

"name": "[concat(parameters('server_name'),'/',
parameters('database_name'), '/default/VA1143/Default')]",

"dependsOn": [

"[resourceId('Microsoft.Sql/servers/vulnerabilityAssessments',
parameters('server_name'), 'Default')]"

],

"properties": {

"baselineResults": [

"result": [

"1"

Next steps
Learn more about Microsoft Defender for Azure SQL.
Learn more about data discovery and classification.
Learn more about storing vulnerability assessment scan results in a storage
account accessible behind firewalls and VNets.
Check out common questions about Azure SQL databases.
Find and remediate vulnerabilities in
your Azure SQL databases
Article • 05/10/2023

Microsoft Defender for Cloud provides vulnerability assessment for your Azure SQL
databases. Vulnerability assessment scans your databases for software vulnerabilities
and provides a list of findings. You can use the findings to remediate software
vulnerabilities and disable findings.

Prerequisites
Make sure that you know whether you're using the express or classic configurations
before you continue.

To see which configuration you're using:

1. In the Azure portal , open the specific resource in Azure SQL Database, SQL
Managed Instance Database, or Azure Synapse.
2. Under the Security heading, select Defender for Cloud.
3. In the Enablement Status, select Configure to open the Microsoft Defender for
SQL settings pane for either the entire server or managed instance.

If the vulnerability settings show the option to configure a storage account, you're using
the classic configuration. If not, you're using the express configuration.

Find vulnerabilities in your Azure SQL


databases
Express configuration (preview)

Permissions
One of the following permissions is required to see vulnerability assessment results
in the Microsoft Defender for Cloud recommendation SQL databases should have
vulnerability findings resolved:

Security Admin
Security Reader
The following permissions are required to changes vulnerability assessment
settings:

SQL Security Manager

If you're receiving any automated emails with links to scan results the following
permissions are required to access the links about scan results or to view scan
results at the resource-level:

SQL Security Manager

Data residency
SQL vulnerability assessment queries the SQL server using publicly available queries
under Defender for Cloud recommendations for SQL vulnerability assessment, and
stores the query results. SQL vulnerability assessment data is stored in the location
of the logical server it's configured on. For example, if the user enabled vulnerability
assessment on a logical server in West Europe, the results will be stored in West
Europe. This data will be collected only if the SQL vulnerability assessment solution
is configured on the logical server.

On-demand vulnerability scans


You can run SQL vulnerability assessment scans on-demand:

1. From the resource's Defender for Cloud page, select View additional findings
in Vulnerability Assessment to access the scan results from previous scans.
2. To run an on-demand scan to scan your database for vulnerabilities, select
Scan from the toolbar:

7 Note

The scan is lightweight and safe. It takes a few seconds to run and is entirely
read-only. It doesn't make any changes to your database.

Remediate vulnerabilities
When a vulnerability scan completes, the report is displayed in the Azure portal. The
report presents:

An overview of your security state


The number of issues that were found
A summary by severity of the risks
A list of the findings for further investigations
To remediate the vulnerabilities discovered:

1. Review your results and determine which of the report's findings are true
security issues for your environment.

2. Select each failed result to understand its impact and why the security check
failed.

 Tip

The findings details page includes actionable remediation information


explaining how to resolve the issue.

3. As you review your assessment results, you can mark specific results as being
an acceptable baseline in your environment. A baseline is essentially a
customization of how the results are reported. In subsequent scans, results
that match the baseline are considered as passes. After you've established
your baseline security state, vulnerability assessment only reports on
deviations from the baseline. In this way, you can focus your attention on the
relevant issues.
4. Any findings you've added to the baseline will now appear as Passed with an
indication that they've passed because of the baseline changes. There's no
need to run another scan for the baseline to take effect.

Your vulnerability assessment scans can now be used to ensure that your database
maintains a high level of security, and that your organizational policies are met.

Next steps
Learn more about Microsoft Defender for Azure SQL.
Learn more about data discovery and classification.
Learn more about storing vulnerability assessment scan results in a storage
account accessible behind firewalls and VNets.
SQL information protection policy in
Microsoft Defender for Cloud
Article • 04/13/2023

SQL information protection's data discovery and classification mechanism provides


advanced capabilities for discovering, classifying, labeling, and reporting the sensitive
data in your databases. It's built into Azure SQL Database, Azure SQL Managed Instance,
and Azure Synapse Analytics.

The classification mechanism is based on the following two elements:

Labels – The main classification attributes, used to define the sensitivity level of the
data stored in the column.
Information Types – Provides additional granularity into the type of data stored in
the column.

The information protection policy options within Defender for Cloud provide a
predefined set of labels and information types which serve as the defaults for the
classification engine. You can customize the policy, according to your organization's
needs, as described below.

How do I access the SQL information


protection policy?
There are three ways to access the information protection policy:
(Recommended) From the Environment settings page of Defender for Cloud
From the security recommendation "Sensitive data in your SQL databases should
be classified"
From the Azure SQL DB data discovery page

Each of these is shown in the relevant tab below.

From Defender for Cloud's settings

Access the policy from Defender for Cloud's


environment settings page
From Defender for Cloud's Environment settings page, select SQL information
protection.

7 Note

This option only appears for users with tenant-level permissions. Grant tenant-
wide permissions to yourself.

Customize your information types


To manage and customize information types:

1. Select Manage information types.


2. To add a new type, select Create information type. You can configure a name,
description, and search pattern strings for the information type. Search pattern
strings can optionally use keywords with wildcard characters (using the character
'%'), which the automated discovery engine uses to identify sensitive data in your
databases, based on the columns' metadata.

3. You can also modify the built-in types by adding additional search pattern strings,
disabling some of the existing strings, or by changing the description.

 Tip

You can't delete built-in types or change their names.

4. Information types are listed in order of ascending discovery ranking, meaning that
the types higher in the list will attempt to match first. To change the ranking
between information types, drag the types to the right spot in the table, or use the
Move up and Move down buttons to change the order.
5. Select OK when you are done.

6. After you completed managing your information types, be sure to associate the
relevant types with the relevant labels, by clicking Configure for a particular label,
and adding or deleting information types as appropriate.

7. To apply your changes, select Save in the main Labels page.

Exporting and importing a policy


You can download a JSON file with your defined labels and information types, edit the
file in the editor of your choice, and then import the updated file.

7 Note

You'll need tenant level permissions to import a policy file.

Permissions
To customize the information protection policy for your Azure tenant, you'll need the
following actions on the tenant's root management group:

Microsoft.Security/informationProtectionPolicies/read
Microsoft.Security/informationProtectionPolicies/write

Learn more in Grant and request tenant-wide visibility.

Manage SQL information protection using


Azure PowerShell
Get-AzSqlInformationProtectionPolicy: Retrieves the effective tenant SQL
information protection policy.
Set-AzSqlInformationProtectionPolicy: Sets the effective tenant SQL information
protection policy.

Next steps
In this article, you learned about defining an information protection policy in Microsoft
Defender for Cloud. To learn more about using SQL Information Protection to classify
and protect sensitive data in your SQL databases, see Azure SQL Database Data
Discovery and Classification.

For more information on security policies and data security in Defender for Cloud, see
the following articles:

Setting security policies in Microsoft Defender for Cloud: Learn how to configure
security policies for your Azure subscriptions and resource groups
Microsoft Defender for Cloud data security: Learn how Defender for Cloud
manages and safeguards data
SQL vulnerability assessment rules
reference guide
Article • 12/29/2022

This article lists the set of built-in rules that are used to flag security vulnerabilities and
highlight deviations from best practices, such as misconfigurations and excessive
permissions. The rules are based on Microsoft's best practices and focus on the security
issues that present the biggest risks to your database and its valuable data. They cover
both database-level issues as well as server-level security issues, like server firewall
settings and server-level permissions. These rules also represent many of the
requirements from various regulatory bodies to meet their compliance standards.

Applies to:
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse
Analytics
SQL Server (all supported versions)

The rules shown in your database scans depend on the SQL version and platform that
was scanned.

To learn about how to implement vulnerability assessment in Azure, see Implement


vulnerability assessment.

For a list of changes to these rules, see SQL vulnerability assessment rules changelog.

Rule categories
SQL vulnerability assessment rules have five categories, which are in the following
sections:

Authentication and Authorization


Auditing and Logging
Data Protection
Installation Updates and Patches
Surface Area Reduction

1 SQL Server 2012+ refers to all versions of SQL Server 2012 and above.

2 SQL Server 2017+ refers to all versions of SQL Server 2017 and above.

3 SQL Server 2016+ refers to all versions of SQL Server 2016 and above.

Authentication and Authorization


Rule ID Rule Title Rule Rule Description Platform
Severity

VA1017 Execute permissions on High The xp_cmdshell extended stored SQL


xp_cmdshell from all procedure spawns a Windows Server
users (except dbo) command shell, passing in a string for 2012+1
should be revoked execution. This rule checks that no
users (other than users with the
CONTROL SERVER permission like
members of the sysadmin server role)
have permission to execute the
xp_cmdshell extended stored
procedure.

VA1020 Database user GUEST High The guest user permits access to a SQL
should not be a database for any logins that are not Server
member of any role mapped to a specific database user. 2012+

This rule checks that no database


roles are assigned to the Guest user. SQL
Database

VA1042 Database ownership High Cross database ownership chaining is SQL


chaining should be an extension of ownership chaining, Server
disabled for all except it does cross the database 2012+

databases except for boundary. This rule checks that this


master , msdb , and option is disabled for all databases SQL
tempdb except for master , msdb , and tempdb . Managed
For master , msdb , and tempdb , cross Instance
database ownership chaining is
enabled by default.

VA1043 Principal GUEST should Medium The guest user permits access to a SQL
not have access to any database for any logins that are not Server
user database mapped to a specific database user. 2012+

This rule checks that the guest user


cannot connect to any database. SQL
Managed
Instance

VA1046 CHECK_POLICY should Low CHECK_POLICY option enables SQL


be enabled for all SQL verifying SQL logins against the Server
logins domain policy. This rule checks that 2012+

CHECK_POLICY option is enabled for


all SQL logins. SQL
Managed
Instance
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1047 Password expiration Low Password expiration policies are used SQL
check should be to manage the lifespan of a password. Server
enabled for all SQL When SQL Server enforces password 2012+

logins expiration policy, users are reminded


to change old passwords, and SQL
accounts that have expired passwords Managed
are disabled. This rule checks that Instance
password expiration policy is enabled
for all SQL logins.

VA1048 Database principals High A database principal that is mapped SQL


should not be mapped to the sa account can be exploited by Server
to the sa account an attacker to elevate permissions to 2012+

sysadmin
SQL
Managed
Instance

VA1052 Remove Low The BUILTIN\Administrators group SQL


BUILTIN\Administrators contains the Windows Local Server
as a server login Administrators group. In older 2012+
versions of Microsoft SQL Server this
group has administrator rights by
default. This rule checks that this
group is removed from SQL Server.

VA1053 Account with default Low sa is a well-known account with SQL


name sa should be principal ID 1. This rule verifies that Server
renamed or disabled the sa account is either renamed or 2012+

disabled.
SQL
Managed
Instance

VA1054 Excessive permissions Low Every SQL Server login belongs to the SQL
should not be granted public server role. When a server Server
to PUBLIC role on principal has not been granted or 2012+

objects or columns denied specific permissions on a


securable object the user inherits the SQL
permissions granted to public on that Database
object. This rule displays a list of all
securable objects or columns that are
accessible to all users through the
PUBLIC role.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1058 sa login should be High sa is a well-known account with SQL


disabled principal ID 1. This rule verifies that Server
the sa account is disabled. 2012+

SQL
Managed
Instance

VA1059 xp_cmdshell should be High xp_cmdshell spawns a Windows SQL


disabled command shell and passes it a string Server
for execution. This rule checks that 2012+

xp_cmdshell is disabled.
SQL
Managed
Instance

VA1067 Database Mail XPs Medium This rule checks that Database Mail is SQL
should be disabled disabled when no database mail Server
when it is not in use profile is configured. Database Mail 2012+
can be used for sending e-mail
messages from the SQL Server
Database Engine and is disabled by
default. If you are not using this
feature, it is recommended to disable
it to reduce the surface area.

VA1068 Server permissions Low Server level permissions are SQL


shouldn't be granted associated with a server level object Server
directly to principals to regulate which users can gain 2012+

access to the object. This rule checks


that there are no server level SQL
permissions granted directly to logins. Managed
Instance

VA1070 Database users Low Database users may share the same SQL
shouldn't share the name as a server login. This rule Server
same name as a server validates that there are no such users. 2012+

login
SQL
Managed
Instance
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1072 Authentication mode Medium There are two possible authentication SQL
should be Windows modes: Windows Authentication Server
Authentication mode and mixed mode. Mixed mode 2012+
means that SQL Server enables both
Windows authentication and SQL
Server authentication. This rule checks
that the authentication mode is set to
Windows Authentication.

VA1094 Database permissions Low Permissions are rules associated with SQL
shouldn't be granted a securable object to regulate which Server
directly to principals users can gain access to the object. 2012+

This rule checks that there are no DB


permissions granted directly to users. SQL
Managed
Instance

VA1095 Excessive permissions Medium Every SQL Server login belongs to the SQL
should not be granted public server role. When a server Server
to PUBLIC role principal has not been granted or 2012+

denied specific permissions on a


securable object the user inherits the SQL
permissions granted to public on that Managed
object. This displays a list of all Instance

permissions that are granted to the


PUBLIC role. SQL
Database

VA1096 Principal GUEST should Low Each database includes a user called SQL
not be granted GUEST. Permissions granted to GUEST Server
permissions in the are inherited by users who have 2012+

database access to the database but who do


not have a user account in the SQL
database. This rule checks that all Managed
permissions have been revoked from Instance

the GUEST user.


SQL
Database
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1097 Principal GUEST should Low Each database includes a user called SQL
not be granted GUEST. Permissions granted to GUEST Server
permissions on objects are inherited by users who have 2012+

or columns access to the database but who do


not have a user account in the SQL
database. This rule checks that all Managed
permissions have been revoked from Instance

the GUEST user.


SQL
Database

VA1099 GUEST user should not Low Each database includes a user called SQL
be granted permissions GUEST. Permissions granted to GUEST Server
on database securables are inherited by users who have 2012+

access to the database but who do


not have a user account in the SQL
database. This rule checks that all Managed
permissions have been revoked from Instance

the GUEST user.


SQL
Database

VA1246 Application roles Low An application role is a database SQL


should not be used principal that enables an application Server
to run with its own user-like 2012+

permissions. Application roles enable


that only users connecting through a SQL
particular application can access Managed
specific data. Application roles are Instance

password-based (which applications


typically hardcode) and not SQL
permission based which exposes the Database
database to app role impersonation
by password-guessing. This rule
checks that no application roles are
defined in the database.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1248 User-defined database Medium To easily manage the permissions in SQL


roles should not be your databases SQL Server provides Server
members of fixed roles several roles, which are security 2012+

principals that group other principals.


They are like groups in the Microsoft SQL
Windows operating system. Database Managed
accounts and other SQL Server roles Instance

can be added into database-level


roles. Each member of a fixed- SQL
database role can add other users to Database

that same role. This rule checks that


no user-defined roles are members of Azure
fixed roles. Synapse

VA1267 Contained users should Medium Contained users are users that exist SQL
use Windows within the database and do not Server
Authentication require a login mapping. This rule 2012+

checks that contained users use


Windows Authentication. SQL
Managed
Instance

VA1280 Server Permissions Medium Every SQL Server login belongs to the SQL
granted to public public server role. When a server Server
should be minimized principal has not been granted or 2012+

denied specific permissions on a


securable object the user inherits the SQL
permissions granted to public on that Managed
object. This rule checks that server Instance
permissions granted to public are
minimized.

VA1282 Orphan roles should be Low Orphan roles are user-defined roles SQL
removed that have no members. Eliminate Server
orphaned roles as they are not 2012+

needed on the system. This rule


checks whether there are any orphan SQL
roles. Managed
Instance

SQL
Database

Azure
Synapse
Rule ID Rule Title Rule Rule Description Platform
Severity

VA2020 Minimal set of High Every SQL Server securable has SQL
principals should be permissions associated with it that Server
granted ALTER or can be granted to principals. 2012+

ALTER ANY USER Permissions can be scoped at the


database-scoped server level (assigned to logins and SQL
permissions server roles) or at the database level Managed
(assigned to database users and Instance

database roles). These rules check


that only a minimal set of principals SQL
are granted ALTER or ALTER ANY Database

USER database-scoped permissions.


Azure
Synapse

VA2033 Minimal set of Low This rule checks which principals are SQL
principals should be granted EXECUTE permission on Server
granted database- objects or columns to ensure this 2012+

scoped EXECUTE permission is granted to a minimal set


permission on objects of principals. Every SQL Server SQL
or columns securable has permissions associated Managed
with it that can be granted to Instance

principals. Permissions can be scoped


at the server level (assigned to logins SQL
and server roles) or at the database Database

level (assigned to database users,


database roles, or application roles). Azure
The EXECUTE permission applies to Synapse
both stored procedures and scalar
functions, which can be used in
computed columns.

VA2103 Unnecessary execute Medium Extended stored procedures are DLLs SQL
permissions on that an instance of SQL Server can Server
extended stored dynamically load and run. SQL Server 2012+

procedures should be is packaged with many extended


revoked stored procedures that allow for SQL
interaction with the system DLLs. This Managed
rule checks that unnecessary execute Instance
permissions on extended stored
procedures have been revoked.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA2107 Minimal set of High SQL Database provides two restricted SQL
principals should be administrative roles in the master Database

members of fixed database to which user accounts can


Azure SQL DB master be added that grant permissions to Azure
database roles either create databases or manage Synapse
logins. This rule check that a minimal
set of principals are members of these
administrative roles.

VA2108 Minimal set of High SQL Server provides roles to help SQL
principals should be manage the permissions. Roles are Server
members of fixed high security principals that group other 2012+

impact database roles principals. Database-level roles are


database-wide in their permission SQL
scope. This rule checks that a minimal Managed
set of principals are members of the Instance

fixed database roles.


SQL
Database

Azure
Synapse

VA2109 Minimal set of Low SQL Server provides roles to help SQL
principals should be manage the permissions. Roles are Server
members of fixed low security principals that group other 2012+

impact database roles principals. Database-level roles are


database-wide in their permission SQL
scope. This rule checks that a minimal Managed
set of principals are members of the Instance

fixed database roles.


SQL
Database

Azure
Synapse
Rule ID Rule Title Rule Rule Description Platform
Severity

VA2110 Execute permissions to High Registry extended stored procedures SQL


access the registry allow Microsoft SQL Server to read Server
should be revoked write and enumerate values and keys 2012+

in the registry. They are used by


Enterprise Manager to configure the SQL
server. This rule checks that the Managed
permissions to execute registry Instance
extended stored procedures have
been revoked from all users (other
than dbo).

VA2113 Data Transformation Medium Data Transformation Services (DTS), is SQL


Services (DTS) a set of objects and utilities that allow Server
permissions should the automation of extract, transform, 2012+

only be granted to SSIS and load operations to or from a


roles database. The objects are DTS SQL
packages and their components, and Managed
the utilities are called DTS tools. This Instance
rule checks that only the SSIS roles
are granted permissions to use the
DTS system stored procedures and
the permissions for the PUBLIC role to
use the DTS system stored procedures
have been revoked.

VA2114 Minimal set of High SQL Server provides roles to help SQL
principals should be manage permissions. Roles are Server
members of high security principals that group other 2012+

impact fixed server principals. Server-level roles are


roles server-wide in their permission scope. SQL
This rule checks that a minimal set of Managed
principals are members of the fixed Instance
server roles.

VA2129 Changes to signed High You can sign a stored procedure, SQL
modules should be function, or trigger with a certificate Server
authorized or an asymmetric key. This is 2012+

designed for scenarios when


permissions cannot be inherited SQL
through ownership chaining or when Database

the ownership chain is broken, such


as dynamic SQL. This rule checks for SQL
changes made to signed modules, Managed
which could be an indication of Instance
malicious use.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA2130 Track all users with Low This check tracks all users with access SQL
access to the database to a database. Make sure that these Database

users are authorized according to


their current role in the organization. Azure
Synapse

VA2201 SQL logins with High This rule checks the accounts with SQL
commonly used names database owner permission for Server
should be disabled commonly used names. Assigning 2012+
commonly used names to accounts
with database owner permission
increases the likelihood of successful
brute force attacks.

Auditing and Logging


Rule ID Rule Title Rule Rule Description Platform
Severity

VA1045 Default trace Medium Default trace provides troubleshooting SQL


should be assistance to database administrators by Server
enabled ensuring that they have the log data 2012+

necessary to diagnose problems the first


time they occur. This rule checks that the SQL
default trace is enabled. Managed
Instance

VA1091 Auditing of both Low SQL Server Login auditing configuration SQL
successful and enables administrators to track the users Server
failed login logging into SQL Server instances. If the user 2012+
attempts chooses to count on 'Login auditing' to track
(default trace) users logging into SQL Server instances,
should be then it is important to enable it for both
enabled when successful and failed login attempts.
'Login auditing'
is set up to track
logins

VA1093 Maximum Low Each SQL Server Error log will have all the SQL
number of error information related to failures / errors that Server
logs should be have occurred since SQL Server was last 2012+
12 or more restarted or since the last time you have
recycled the error logs. This rule checks that
the maximum number of error logs is 12 or
more.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1258 Database High Database owners can perform all SQL


owners are as configuration and maintenance activities on Server
expected the database and can also drop databases in 2016+3

SQL Server. Tracking database owners is


important to avoid having excessive SQL
permission for some principals. Create a Database

baseline that defines the expected database


owners for the database. This rule checks Azure
whether the database owners are as defined Synapse
in the baseline.

VA1264 Auditing of both Low SQL Server auditing configuration enables SQL
successful and administrators to track the users logging Server
failed login into SQL Server instances that they're 2012+

attempts should responsible for. This rule checks that


be enabled auditing is enabled for both successful and SQL
failed login attempts. Managed
Instance

VA1265 Auditing of both Medium SQL Server auditing configuration enables SQL
successful and administrators to track users logging to SQL Server
failed login Server instances that they're responsible for. 2012+

attempts for This rule checks that auditing is enabled for


contained DB both successful and failed login attempts for SQL
authentication contained DB authentication. Managed
should be Instance
enabled

VA1281 All memberships Medium User-defined roles are security principals SQL
for user-defined defined by the user to group principals to Server
roles should be easily manage permissions. Monitoring 2012+

intended these roles is important to avoid having


excessive permissions. Create a baseline that SQL
defines expected membership for each user- Managed
defined role. This rule checks whether all Instance

memberships for user-defined roles are as


defined in the baseline. SQL
Database

Azure
Synapse
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1283 There should be Low Auditing an instance of the SQL Server SQL
at least 1 active Database Engine or an individual database Server
audit in the involves tracking and logging events that 2012+

system occur on the Database Engine. The SQL


Server Audit object collects a single instance SQL
of server or database-level actions and Managed
groups of actions to monitor. This rule Instance
checks that there is at least one active audit
in the system.

VA2061 Auditing should High Azure SQL Database Auditing tracks SQL
be enabled at database events and writes them to an audit Database

the server level log in your Azure storage account. Auditing


helps you understand database activity and Azure
gain insight into discrepancies and Synapse
anomalies that could indicate business
concerns or suspected security violations as
well as helps you meet regulatory
compliance. For more information, see Azure
SQL Auditing. This rule checks that auditing
is enabled.

Data Protection
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1098 Any Existing High Service Broker and Mirroring endpoints SQL
SSB or support different encryption algorithms Server
Mirroring including no-encryption. This rule checks that 2012+
endpoint any existing endpoint requires AES
should require encryption.
AES connection
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1219 Transparent Medium Transparent data encryption (TDE) helps to SQL


data protect the database files against information Server
encryption disclosure by performing real-time encryption 2012+

should be and decryption of the database, associated


enabled backups, and transaction log files 'at rest', SQL
without requiring changes to the application. Managed
This rule checks that TDE is enabled on the Instance

database.
SQL
Database

Azure
Synapse

VA1220 Database High Microsoft SQL Server can use Secure Sockets SQL
communication Layer (SSL) or Transport Layer Security (TLS) to Server
using TDS encrypt data that is transmitted across a 2012+

should be network between an instance of SQL Server


protected and a client application. This rule checks that SQL
through TLS all connections to the SQL Server are Managed
encrypted through TLS. Instance

VA1221 Database High SQL Server uses encryption keys to help SQL
Encryption secure data credentials and connection Server
Symmetric information that is stored in a server 2012+

Keys should database. SQL Server has two kinds of keys:


use AES symmetric and asymmetric. This rule checks SQL
algorithm that Database Encryption Symmetric Keys use Managed
AES algorithm. Instance

SQL
Database

Azure
Synapse

VA1222 Cell-Level High Cell-Level Encryption (CLE) allows you to SQL


Encryption encrypt your data using symmetric and Server
keys should asymmetric keys. This rule checks that Cell- 2012+

use AES Level Encryption symmetric keys use AES


algorithm algorithm. SQL
Managed
Instance
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1223 Certificate keys High Certificate keys are used in RSA and other SQL
should use at encryption algorithms to protect data. These Server
least 2048 bits keys need to be of enough length to secure 2012+

the user's data. This rule checks that the key's


length is at least 2048 bits for all certificates. SQL
Managed
Instance

SQL
Database

Azure
Synapse

VA1224 Asymmetric High Database asymmetric keys are used in many SQL
keys' length encryption algorithms these keys need to be Server
should be at of enough length to secure the encrypted 2012

least 2048 bits data this rule checks that all asymmetric keys
stored in the database are of length of at SQL
least 2048 bits Server
2014

SQL
Database

VA1279 Force High When the Force Encryption option for the SQL
encryption Database Engine is enabled all Server
should be communications between client and server is 2012+
enabled for encrypted regardless of whether the 'Encrypt
TDS connection' option (such as from SSMS) is
checked or not. This rule checks that Force
Encryption option is enabled.

VA2060 SQL Threat Medium SQL Threat Detection provides a layer of


Detection security that detects potential vulnerabilities SQL
should be and anomalous activity in databases such as Managed
enabled at the SQL injection attacks and unusual behavior Instance

server level patterns. When a potential threat is detected


Threat Detection sends an actionable real- SQL
time alert by email and in Microsoft Defender Database

for Cloud, which includes clear investigation


and remediation steps for the specific threat. Azure
For more information, please see Configure Synapse
threat detection. This check verifies that SQL
Threat Detection is enabled
Installation Updates and Patches
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1018 Latest High Microsoft periodically releases Cumulative SQL


updates Updates (CUs) for each version of SQL Server. Server
should be This rule checks whether the latest CU has 2005

installed been installed for the particular version of SQL


Server being used, by passing in a string for SQL
execution. This rule checks that all users Server
(except dbo) do not have permission to 2008

execute the xp_cmdshell extended stored


procedure. SQL
Server
2008

SQL
Server
2012

SQL
Server
2014

SQL
Server
2016

SQL
Server
2017

VA2128 Vulnerability High To run a vulnerability assessment scan on your SQL


assessment is SQL Server the server needs to be upgraded to Server
not SQL Server 2012 or higher, SQL Server 2008 R2 2012+

supported for and below are no longer supported by


SQL Server Microsoft. For more information, see SQL
versions lower Managed
than SQL Instance

Server 2012
SQL
Database

Azure
Synapse
Surface Area Reduction
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1022 Ad hoc Medium Ad hoc distributed queries use the OPENROWSET SQL
distributed and OPENDATASOURCE functions to connect to Server
queries remote data sources that use OLE DB. This rule 2012+
should be checks that ad hoc distributed queries are
disabled disabled.

VA1023 CLR should High The CLR allows managed code to be hosted by SQL
be disabled and run in the Microsoft SQL Server Server
environment. This rule checks that CLR is 2012+
disabled.

VA1026 CLR should Medium The CLR allows managed code to be hosted by SQL
be disabled and run in the Microsoft SQL Server Server
environment. CLR strict security treats SAFE and 2017+2

EXTERNAL_ACCESS assemblies as if they were


marked UNSAFE and requires all assemblies be SQL
signed by a certificate or asymmetric key with a Managed
corresponding login that has been granted Instance
UNSAFE ASSEMBLY permission in the master
database. This rule checks that CLR is disabled.

VA1027 Untracked High Assemblies marked as UNSAFE are required to SQL


trusted be signed by a certificate or asymmetric key Server
assemblies with a corresponding login that has been 2017+

should be granted UNSAFE ASSEMBLY permission in the


removed master database. Trusted assemblies may SQL
bypass this requirement. Managed
Instance

VA1044 Remote Medium This rule checks that remote dedicated admin SQL
Admin connections are disabled if they are not being Server
Connections used for clustering to reduce attack surface 2012+

should be area. SQL Server provides a dedicated


disabled administrator connection (DAC). The DAC lets SQL
unless an administrator access a running server to Managed
specifically execute diagnostic functions or Transact-SQL Instance
required statements, or to troubleshoot problems on the
server and it becomes an attractive target to
attack when it is enabled remotely.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1051 AUTO_CLOSE Medium The AUTO_CLOSE option specifies whether the SQL
should be database shuts down gracefully and frees Server
disabled on resources after the last user disconnects. 2012+
all databases Regardless of its benefits it can cause denial of
service by aggressively opening and closing the
database, thus it is important to keep this
feature disabled. This rule checks that this
option is disabled on the current database.

VA1066 Unused Low Service Broker provides queuing and reliable SQL
service messaging for SQL Server. Service Broker is Server
broker used both for applications that use a single SQL 2012+
endpoints Server instance and applications that distribute
should be work across multiple instances. Service Broker
removed endpoints provide options for transport security
and message forwarding. This rule enumerates
all the service broker endpoints. Remove those
that are not used.

VA1071 'Scan for Medium When 'Scan for startup procs' is enabled SQL SQL
startup Server scans for and runs all automatically run Server
stored stored procedures defined on the server. If this 2012+
procedures' option is enabled SQL Server scans for and runs
option all automatically run stored procedures defined
should be on the server. This rule checks that this option is
disabled disabled.

VA1092 SQL Server Low SQL Server uses the SQL Server Browser service SQL
instance to enumerate instances of the Database Engine Server
shouldn't be installed on the computer. This enables client 2012+
advertised by applications to browse for a server and helps
the SQL clients distinguish between multiple instances
Server of the Database Engine on the same computer.
Browser This rule checks that the SQL instance is hidden.
service

VA1102 The High The TRUSTWORTHY database property is used SQL


Trustworthy to indicate whether the instance of SQL Server Server
bit should be trusts the database and the contents within it. If 2012+

disabled on this option is enabled database modules (for


all databases example user-defined functions or stored SQL
except MSDB procedures) that use an impersonation context Managed
can access resources outside the database. This Instance
rule verifies that the TRUSTWORTHY bit is
disabled on all databases except MSDB.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1143 'dbo' user Medium The 'dbo' or database owner is a user account SQL
should not that has implied permissions to perform all Server
be used for activities in the database. Members of the 2012+

normal sysadmin fixed server role are automatically


service mapped to dbo. This rule checks that dbo is not SQL
operation the only account allowed to access this Managed
database. Note that on a newly created clean Instance

database this rule will fail until additional roles


are created. SQL
Database

Azure
Synapse

VA1144 Model Medium The Model database is used as the template for SQL
database all databases created on the instance of SQL Server
should only Server. Modifications made to the model 2012+

be accessible database such as database size recovery model


by 'dbo' and other database options are applied to any SQL
databases created afterward. This rule checks Managed
that dbo is the only account allowed to access Instance
the model database.

VA1230 Filestream High FILESTREAM integrates the SQL Server Database SQL
should be Engine with an NTFS file system by storing Server
disabled varbinary (max) binary large object (BLOB) data 2012+
as files on the file system. Transact-SQL
statements can insert, update, query, search,
and back up FILESTREAM data. Enabling
Filestream on SQL server exposes additional
NTFS streaming API, which increases its attack
surface and makes it prone to malicious attacks.
This rule checks that Filestream is disabled.

VA1235 Server Medium Disable the deprecated server configuration SQL


configuration 'Replication XPs' to limit the attack surface area. Server
'Replication This is an internal only configuration setting. 2012+

XPs' should
be disabled SQL
Managed
Instance
Rule ID Rule Title Rule Rule Description Platform
Severity

VA1244 Orphaned Medium A database user that exists on a database but SQL
users should has no corresponding login in the master Server
be removed database or as an external resource (for 2012+

from SQL example, a Windows user) is referred to as an


server orphaned user and it should either be removed SQL
databases or remapped to a valid login. This rule checks Managed
that there are no orphaned users. Instance

VA1245 The dbo High There is redundant information about the dbo SQL
information identity for any database: metadata stored in Server
should be the database itself and metadata stored in 2012+

consistent master DB. This rule checks that this information


between the is consistent between the target DB and master. SQL
target DB Managed
and master Instance

VA1247 There should High When SQL Server has been configured to 'scan SQL
be no SPs for startup procs' the server will scan master DB Server
marked as for stored procedures marked as auto-start. This 2012+
auto-start rule checks that there are no SPs marked as
auto-start.

VA1256 User CLR High CLR assemblies can be used to execute arbitrary SQL
assemblies code on SQL Server process. This rule checks Server
should not that there are no user-defined CLR assemblies 2012+

be defined in in the database.


the database SQL
Managed
Instance

VA1277 Polybase High PolyBase is a technology that accesses and SQL


network combines both non-relational and relational Server
encryption data all from within SQL Server. Polybase 2016+
should be network encryption option configures SQL
enabled Server to encrypt control and data channels
when using Polybase. This rule verifies that this
option is enabled.

VA1278 Create a Medium The SQL Server Extensible Key Management SQL
baseline of (EKM) enables third-party EKM / Hardware Server
External Key Security Modules (HSM) vendors to register 2012+

Management their modules in SQL Server. When registered


Providers SQL Server users can use the encryption keys SQL
stored on EKM modules,this rule displays a list Managed
of EKM providers being used in the system. Instance
Rule ID Rule Title Rule Rule Description Platform
Severity

VA2062 Database- High The Azure SQL Database-level firewall helps SQL
level firewall protect your data by preventing all access to Database

rules should your database until you specify which IP


not grant addresses have permission. Database-level Azure
excessive firewall rules grant access to the specific Synapse
access database based on the originating IP address of
each request. Database-level firewall rules for
master and user databases can only be created
and managed through Transact-SQL (unlike
server-level firewall rules, which can also be
created and managed using the Azure portal or
PowerShell). For more information, see Azure
SQL Database and Azure Synapse Analytics IP
firewall rules. This check verifies that database-
level firewall rules do not grant access to more
than 255 IP addresses.

VA2063 Server-level High The Azure SQL server-level firewall helps protect SQL
firewall rules your server by preventing all access to your Database

should not databases until you specify which IP addresses


grant have permission. Server-level firewall rules grant Azure
excessive access to all databases that belong to the server Synapse
access based on the originating IP address of each
request. Server-level firewall rules can only be
created and managed through Transact-SQL as
well as through the Azure portal or PowerShell.
For more information, see Azure SQL Database
and Azure Synapse Analytics IP firewall rules.
This check verifies that server-level firewall rules
do not grant access to more than 255 IP
addresses.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA2064 Database- High The Azure SQL Database-level firewall helps SQL
level firewall protect your data by preventing all access to Database

rules should your database until you specify which IP


be tracked addresses have permission. Database-level Azure
and firewall rules grant access to the specific Synapse
maintained database based on the originating IP address of
at a strict each request. Database-level firewall rules for
minimum master and user databases can only be created
and managed through Transact-SQL (unlike
server-level firewall rules, which can also be
created and managed using the Azure portal or
PowerShell). For more information, see Azure
SQL Database and Azure Synapse Analytics IP
firewall rules. This check enumerates all the
database-level firewall rules so that any changes
made to them can be identified and addressed.

VA2065 Server-level High The Azure SQL server-level firewall helps protect SQL
firewall rules your data by preventing all access to your Database

should be databases until you specify which IP addresses


tracked and have permission. Server-level firewall rules grant Azure
maintained access to all databases that belong to the server Synapse
at a strict based on the originating IP address of each
minimum request. Server-level firewall rules can be
created and managed through Transact-SQL as
well as through the Azure portal or PowerShell.
For more information, see Azure SQL Database
and Azure Synapse Analytics IP firewall rules.
This check enumerates all the server-level
firewall rules so that any changes made to them
can be identified and addressed.

VA2111 Sample Low Microsoft SQL Server comes shipped with SQL
databases several sample databases. This rule checks Server
should be whether the sample databases have been 2012+

removed removed.
SQL
Managed
Instance
Rule ID Rule Title Rule Rule Description Platform
Severity

VA2120 Features that High SQL Server is capable of providing a wide range SQL
may affect of features and services. Some of the features Server
security and services provided by default may not be 2012+

should be necessary and enabling them could adversely


disabled affect the security of the system. This rule SQL
checks that these features are disabled. Managed
Instance

VA2121 'OLE High SQL Server is capable of providing a wide range SQL
Automation of features and services. Some of the features Server
Procedures' and services, provided by default, may not be 2012+

feature necessary, and enabling them could adversely


should be affect the security of the system. The OLE SQL
disabled Automation Procedures option controls Managed
whether OLE Automation objects can be Instance
instantiated within Transact-SQL batches. These
are extended stored procedures that allow SQL
Server users to execute functions external to
SQL Server. Regardless of its benefits it can also
be used for exploits, and is known as a popular
mechanism to plant files on the target
machines. It is advised to use PowerShell as a
replacement for this tool. This rule checks that
'OLE Automation Procedures' feature is
disabled.

VA2122 'User Medium SQL Server is capable of providing a wide range SQL
Options' of features and services. Some of the features Server
feature and services provided by default may not be 2012+

should be necessary and enabling them could adversely


disabled affect the security of the system. The user SQL
options specifies global defaults for all users. A Managed
list of default query processing options is Instance
established for the duration of a user's work
session. The user options allows you to change
the default values of the SET options (if the
server's default settings are not appropriate).
This rule checks that 'user options' feature is
disabled.
Rule ID Rule Title Rule Rule Description Platform
Severity

VA2126 Extensibility- Medium SQL Server provides a wide range of features SQL
features that and services. Some of the features and services, Server
may affect provided by default, may not be necessary, and 2016+
security enabling them could adversely affect the
should be security of the system. This rule checks that
disabled if configurations that allow extraction of data to
not needed an external data source and the execution of
scripts with certain remote language extensions
are disabled.

Removed rules
Rule ID Rule Title

VA1021 Global temporary stored procedures should be removed

VA1024 C2 Audit Mode should be enabled

VA1069 Permissions to select from system tables and views should be revoked from non-
sysadmins

VA1090 Ensure all Government Off The Shelf (GOTS) and Custom Stored Procedures are
encrypted

VA1103 Use only CLR with SAFE_ACCESS permission

VA1229 Filestream setting in registry and in SQL Server configuration should match

VA1231 Filestream should be disabled (SQL)

VA1234 Common Criteria setting should be enabled

VA1252 List of events being audited and centrally managed via server audit specifications.

VA1253 List of DB-scoped events being audited and centrally managed via server audit
specifications

VA1263 List all the active audits in the system

VA1266 The 'MUST_CHANGE' option should be set on all SQL logins

VA1276 Agent XPs feature should be disabled

VA1286 Database permissions shouldn't be granted directly to principals (OBJECT or COLUMN)

VA2000 Minimal set of principals should be granted high impact database-scoped permissions
Rule ID Rule Title

VA2001 Minimal set of principals should be granted high impact database-scoped permissions
on objects or columns

VA2002 Minimal set of principals should be granted high impact database-scoped permissions
on various securables

VA2010 Minimal set of principals should be granted medium impact database-scoped


permissions

VA2021 Minimal set of principals should be granted database-scoped ALTER permissions on


objects or columns

VA2022 Minimal set of principals should be granted database-scoped ALTER permission on


various securables

VA2030 Minimal set of principals should be granted database-scoped SELECT or EXECUTE


permissions

VA2031 Minimal set of principals should be granted database-scoped SELECT

VA2032 Minimal set of principals should be granted database-scoped SELECT or EXECUTE


permissions on schema

VA2034 Minimal set of principals should be granted database-scoped EXECUTE permission on


XML Schema Collection

VA2040 Minimal set of principals should be granted low impact database-scoped permissions

VA2041 Minimal set of principals should be granted low impact database-scoped permissions
on objects or columns

VA2042 Minimal set of principals should be granted low impact database-scoped permissions
on schema

VA2050 Minimal set of principals should be granted database-scoped VIEW DEFINITION


permissions

VA2051 Minimal set of principals should be granted database-scoped VIEW DEFINITION


permissions on objects or columns

VA2052 Minimal set of principals should be granted database-scoped VIEW DEFINITION


permission on various securables

VA2100 Minimal set of principals should be granted high impact server-scoped permissions

VA2101 Minimal set of principals should be granted medium impact server-scoped permissions

VA2102 Minimal set of principals should be granted low impact server-scoped permissions

VA2104 Execute permissions on extended stored procedures should be revoked from PUBLIC
Rule ID Rule Title

VA2105 Login password should not be easily guessed

VA2112 Permissions from PUBLIC for Data Transformation Services (DTS) should be revoked

VA2115 Minimal set of principals should be members of medium impact fixed server roles

VA2123 'Remote Access' feature should be disabled

VA2127 'External Scripts' feature should be disabled

Next steps
Vulnerability assessment
SQL vulnerability assessment rules changelog
SQL vulnerability assessment rules
changelog
Article • 12/29/2022

This article details the changes made to the SQL vulnerability assessment service rules.
Rules that are updated, removed, or added will be outlined below. For an updated list of
SQL vulnerability assessment rules, see SQL vulnerability assessment rules.

June 2022
Rule ID Rule Title Change details

VA2129 Changes to signed modules should be authorized Logic change

VA1219 Transparent data encryption should be enabled Logic change

VA1047 Password expiration check should be enabled for all SQL logins Logic change

January 2022
Rule ID Rule Title Change
details

VA1288 Sensitive data columns should be classified Removed


rule

VA1054 Minimal set of principals should be members of fixed high impact Logic change
database roles

VA1220 Database communication using TDS should be protected through TLS Logic change

VA2120 Features that may affect security should be disabled Logic change

VA2129 Changes to signed modules should be authorized Logic change

June 2021
Rule ID Rule Title Change
details

VA1220 Database communication using TDS should be protected through TLS Logic change
Rule ID Rule Title Change
details

VA2108 Minimal set of principals should be members of fixed high impact Logic change
database roles

December 2020
Rule ID Rule Title Change details

VA1017 Execute permissions on xp_cmdshell from all users (except dbo) Title and
should be revoked description
change

VA1021 Global temporary stored procedures should be removed Removed rule

VA1024 C2 Audit Mode should be enabled Removed rule

VA1042 Database ownership chaining should be disabled for all databases Description
except for master , msdb , and tempdb change

VA1044 Remote Admin Connections should be disabled unless specifically Title and
required description
change

VA1047 Password expiration check should be enabled for all SQL logins Title and
description
change

VA1051 AUTO_CLOSE should be disabled on all databases Description


change

VA1053 Account with default name 'sa' should be renamed or disabled Description
change

VA1067 Database Mail XPs should be disabled when it is not in use Title and
description
change

VA1068 Server permissions shouldn't be granted directly to principals Logic change

VA1069 Permissions to select from system tables and views should be Removed rule
revoked from non-sysadmins

VA1090 Ensure all Government Off The Shelf (GOTS) and Custom Stored Removed rule
Procedures are encrypted

VA1091 Auditing of both successful and failed login attempts (default trace) Description
should be enabled when 'Login auditing' is set up to track logins change
Rule ID Rule Title Change details

VA1098 Any Existing SSB or Mirroring endpoint should require AES Logic change
connection

VA1103 Use only CLR with SAFE_ACCESS permission Removed rule

VA1219 Transparent data encryption should be enabled Description


change

VA1229 Filestream setting in registry and in SQL Server configuration Removed rule
should match

VA1230 Filestream should be disabled Description


change

VA1231 Filestream should be disabled (SQL) Removed rule

VA1234 Common Criteria setting should be enabled Removed rule

VA1235 Replication XPs should be disabled Title, description,


and Logic
change

VA1252 List of events being audited and centrally managed via server audit Removed rule
specifications.

VA1253 List of DB-scoped events being audited and centrally managed via Removed rule
server audit specifications.

VA1263 List all the active audits in the system Removed rule

VA1264 Auditing of both successful and failed login attempts should be Description
enabled change

VA1266 The 'MUST_CHANGE' option should be set on all SQL logins Removed rule

VA1276 Agent XPs feature should be disabled Removed rule

VA1281 All memberships for user-defined roles should be intended Logic change

VA1282 Orphan roles should be removed Logic change

VA1286 Database permissions shouldn't be granted directly to principals Removed rule


(OBJECT or COLUMN)

VA1288 Sensitive data columns should be classified Description


change

VA2030 Minimal set of principals should be granted database-scoped Removed rule


SELECT or EXECUTE permissions
Rule ID Rule Title Change details

VA2033 Minimal set of principals should be granted database-scoped Description


EXECUTE permission on objects or columns change

VA2062 Database-level firewall rules should not grant excessive access Description
change

VA2063 Server-level firewall rules should not grant excessive access Description
change

VA2100 Minimal set of principals should be granted high impact server- Removed rule
scoped permissions

VA2101 Minimal set of principals should be granted medium impact server- Removed rule
scoped permissions

VA2102 Minimal set of principals should be granted low impact server- Removed rule
scoped permissions

VA2103 Unnecessary execute permissions on extended stored procedures Logic change


should be revoked

VA2104 Execute permissions on extended stored procedures should be Removed rule


revoked from PUBLIC

VA2105 Login password should not be easily guessed Removed rule

VA2108 Minimal set of principals should be members of fixed high impact Logic change
database roles

VA2111 Sample databases should be removed Logic change

VA2112 Permissions from PUBLIC for Data Transformation Services (DTS) Removed rule
should be revoked

VA2113 Data Transformation Services (DTS) permissions should only be Description and
granted to SSIS roles logic change

VA2114 Minimal set of principals should be members of high impact fixed Logic change
server roles

VA2115 Minimal set of principals should be members of medium impact Removed rule
fixed server roles

VA2120 Features that may affect security should be disabled Logic change

VA2121 'OLE Automation Procedures' feature should be disabled Title and


description
change

VA2123 'Remote Access' feature should be disabled Removed rule


Rule ID Rule Title Change details

VA2126 Features that may affect security should be disabled Title, description,
and logic change

VA2127 'External Scripts' feature should be disabled Removed rule

VA2129 Changes to signed modules should be authorized Platform update

VA2130 Track all users with access to the database Description and
logic change

Next steps
SQL vulnerability assessment rules
SQL vulnerability assessment overview
Store vulnerability assessment scan results in a storage account accessible behind
firewalls and VNets
Optimized locking
Article • 05/03/2023

Applies to:
Azure SQL Database

This article introduces the optimized locking feature, a new SQL Server Database Engine
capability that offers an improved transaction locking mechanism that reduces lock
memory consumption and blocking for concurrent transactions.

What is optimized locking?


Optimized locking helps to reduce lock memory as very few locks are held for large
transactions. In addition, optimized locking also avoids lock escalations. This allows
more concurrent access to the table.

Optimized locking is composed of two primary components: Transaction ID (TID)


locking and lock after qualification (LAQ).

A transaction ID (TID) is a unique identifier of a transaction. Each row is labeled


with the last TID that modified it. Instead of potentially many key or row identifier
locks, a single lock on the TID is used. For more information, review the section on
Transaction ID (TID) locking.
Lock after qualification (LAQ) is an optimization that evaluates predicates of a
query on the latest committed version of the row without acquiring a lock, thus
improving concurrency. For more information, review the section on Lock after
qualification (LAQ).

For example:

Without optimized locking, updating 1 million rows in a table may require 1 million
exclusive (X) row locks held until the end of the transaction.
With optimized locking, updating 1 million rows in a table may require 1 million X
row locks but each lock is released as soon as each row is updated, and only one
TID lock will be held until the end of the transaction.

This article covers these two core concepts of optimized locking in detail.

Availability
Currently, optimized locking is available in Azure SQL Database only. For more
information, see Where is optimized locking currently available?
Is optimized locking enabled?
Optimized locking is enabled per user database. Connect to your database, then use the
following query to check if optimized locking is enabled on your database:

SQL

SELECT IsOptimizedLockingOn = DATABASEPROPERTYEX('testdb',


'IsOptimizedLockingOn');

If you are not connected to the database specified in DATABASEPROPERTYEX , the result will
be NULL . You should receive 0 (optimized locking is disabled) or 1 (enabled).

Optimized locking builds on other database features:

Optimized locking requires accelerated database recovery (ADR) to be enabled on


the database.
For the most benefit from optimized locking, read committed snapshot isolation
(RCSI) should be enabled for the database.

Both ADR and RCSI are enabled by default in Azure SQL Database. To verify that these
options are enabled for your current database, use the following T-SQL query:

SQL

SELECT name

, is_read_committed_snapshot_on

, is_accelerated_database_recovery_on

FROM sys.databases

WHERE name = db_name();

Locking overview
This is a short summary of the behavior when optimized locking is not enabled. For
more information, review the Transaction locking and row versioning guide.

In the Database Engine, locking is a mechanism that prevents multiple transactions from
updating the same data simultaneously, in order to protect data integrity and
consistency.

When a transaction needs to modify data, it can request a lock on the data. The lock is
granted if no other conflicting locks are held on the data, and the transaction can
proceed with the modification. If another conflicting lock is held on the data, the
transaction must wait for the lock to be released before it can proceed.
When multiple transactions are allowed to access the same data concurrently, the
Database Engine must resolve potentially complex conflicts with concurrent reads and
writes. Locking is one of the mechanisms by which the database engine can provide the
semantics for the ANSI SQL transaction isolation levels. Although locking in databases is
essential, reduced concurrency, deadlocks, complexity, and lock overhead can impact
performance and scalability.

Optimized locking and transaction ID (TID) locking


Every row in the Database Engine internally contains a transaction ID (TID) when row
versioning is in use. This TID is persisted on disk. Every transaction modifying a row will
stamp that row with its TID.

With TID locking, instead of taking the lock on the key of the row, a lock is taken on the
TID of the row. The modifying transaction will hold an X lock on its TID. Other
transactions will acquire an S lock on the TID to check if the first transaction is still
active. With TID locking, page and row locks continue to be taken for updates, but each
page and row lock is released as soon as each row is updated. The only lock held until
end of transaction is the X lock on the TID resource, replacing page and row (key) locks
as demonstrated in the next demo. (Other standard database and object locks are not
affected by optimized locking.)

Optimized locking helps to reduce lock memory as very few locks are held for large
transactions. In addition, optimized locking also avoids lock escalations. This allows
other concurrent transactions to access the table.

Consider the following T-SQL sample scenario that looks for locks on the user's current
session:

SQL

CREATE TABLE t0

(a int PRIMARY KEY not null

,b int null);

INSERT INTO t0 VALUES (1,10),(2,20),(3,30);

GO

BEGIN TRAN

UPDATE t0

SET b=b+10;

SELECT * FROM sys.dm_tran_locks WHERE request_session_id = @@SPID

AND resource_type in ('PAGE','RID','KEY','XACT');

COMMIT TRAN

GO

DROP TABLE IF EXISTS t0;

The same query without the benefit of optimized locking creates four locks:

The sys.dm_tran_locks dynamic management view (DMV) can be useful in examining or


troubleshooting locking issues, including observing optimized locking in action.

Optimized locking and lock after qualification (LAQ)


Building on the TID infrastructure, optimized locking changes how query predicates
secure locks.

Without optimized locking, predicates from queries are checked row by row in a scan by
first taking an update (U) row lock. If the predicate is satisfied, an X row lock is taken
before updating the row.

With optimized locking, and when the read committed snapshot isolation level (RCSI) is
enabled, predicates are applied on latest committed version without taking any row
locks. If the predicate does not satisfy, the query moves to the next row in the scan. If
the predicate is satisfied, an X row lock is taken to actually update the row. The X row
lock is released as soon as the row update is complete, before the end of the
transaction.

Since predicate evaluation is performed without acquiring any locks, concurrent queries
modifying different rows will not block each other.

Example:

SQL

CREATE TABLE t1

(a int not null

,b int null);

INSERT INTO t1 VALUES (1,10),(2,20),(3,30);

GO

Session 1 Session 2
Session 1 Session 2

BEGIN TRAN

UPDATE t1

SET b=b+10

WHERE a=1;

BEGIN TRAN

UPDATE t1

SET b=b+10

WHERE a=2;

COMMIT TRAN

COMMIT TRAN

Note that the behavior of blocking changes with optimized locking in the previous
example. Without optimized locking, Session 2 will be blocked.

However, with optimized locking, Session 2 will not be blocked as the latest committed
version of row 1 contains a=1, which does not satisfy the predicate of Session 2.

If the predicate is satisfied, we wait for any active transaction on the row to finish. If we
had to wait for the S TID lock, the row might have changed, and the latest committed
version might have changed. In that case, instead of aborting the transaction due to an
update conflict, the Database Engine will retry the predicate evaluation on the same row.
If the predicate qualifies upon retry, the row will be updated.

Consider the following example when a predicate change is automatically retried:

SQL

CREATE TABLE t2

(a int not null

,b int null);

INSERT INTO t2 VALUES (1,10),(2,20),(3,30);

GO

Session 1 Session 2

BEGIN TRAN

UPDATE t2

SET b=b+10

WHERE a=1;
Session 1 Session 2

BEGIN TRAN

UPDATE t2

SET b=b+10

WHERE a=1;

COMMIT TRAN

COMMIT TRAN

Query behavior changes with optimized locking and RCSI


Concurrent systems under read committed snapshot isolation level (RCSI) with
workloads that rely on strict execution order of transactions, might experience different
query behavior when optimized locking is enabled.

Consider the following example where transaction T2 is updating table t1 based on


column b that was updated during transaction T1.

SQL

CREATE TABLE t1 (a int not null, b int null);

INSERT INTO t1 VALUES (1,1);

GO

Session 1 Session 2

BEGIN TRAN T1

UPDATE t1

SET b=2

WHERE a=1;

BEGIN TRAN T2

UPDATE t1

SET b=3

WHERE b=2;

COMMIT TRAN

COMMIT TRAN

Let's evaluate the outcome of the above scenario with and without lock after
qualification (LAQ), an integral part of optimized locking.
Without LAQ

Without LAQ, transaction T2 will be blocked and wait for the transaction T1 to complete.

After both transactions commit, table t1 will contain the following rows:

a | b

1 | 3

With LAQ

With LAQ, transaction T2 will use the latest committed version of the row b ( b =1 in the
version store) to evaluate its predicate ( b =2). This row does not qualify; hence it is
skipped and T2 moves to the next row without having been blocked by transaction T1.
In this example, LAQ removes blocking but leads to different results.

After both transactions commit, table t1 will contain the following rows:

a | b

1 | 2

) Important

Even without LAQ, applications should not assume that SQL Server (under
versioning isolation levels) will guarantee strict ordering, without using locking
hints. Our general recommendation for customers on concurrent systems under
RCSI with workloads that rely on strict execution order of transactions (as shown in
the previous exercise), is to use stricter isolation levels.

Best practices with optimized locking

Enable read committed snapshot isolation (RCSI)


To maximize the benefits of optimized locking, it is recommended to enable read
committed snapshot isolation (RCSI) on the database and use read committed isolation
as the default isolation level. If not enabled, enable RCSI using the following sample:
SQL

ALTER DATABASE databasename SET READ_COMMITTED_SNAPSHOT ON;

In Azure SQL Database, RCSI is enabled by default and read committed is the default
isolation level. With RCSI enabled and when using read committed isolation level,
readers don't block writers and writers don't block readers. Readers read a version of the
row from the snapshot taken at the start of the query. With LAQ, writers will qualify rows
per the predicate based on the latest committed version of the row without acquiring U
locks. With LAQ, a query will wait only if the row qualifies and there is an active write
transaction on that row. Qualifying based on the latest committed version and locking
only the qualified rows reduces blocking and increases concurrency.

In addition to reduced blocking, the lock memory required will be reduced. This is
because readers don't take any locks, and writers take only short duration locks, instead
of locks that expire at the end of the transaction. When using stricter isolation levels like
repeatable read or serializable, the Database Engine is forced to hold row and page
locks until the end of the transaction, for both readers and writers, resulting in increased
blocking and lock memory.

Avoid locking hints


While table and query hints are honored, they reduce the benefit of optimized locking.
Lock hints like UPDLOCK, READCOMMITTEDLOCK, XLOCK, HOLDLOCK, etc., in your
queries reduce the full benefits of optimized locking. Having such lock hints in the
queries forces the Database Engine to take row/page locks and hold them until the end
of the transaction, to honor the intent of the lock hints. Some applications have logic
where lock hints are needed, for example when reading a row with select with UPDLOCK
and then updating it later. We recommend using lock hints only where needed.

With optimized locking, there are no restrictions on existing queries and queries do not
need to be rewritten. Queries that are not using hints will benefit most from optimized
locking.

A table hint on one table in a query will not disable optimized locking for other tables in
the same query. Further, optimized locking only affects the locking behavior of tables
being updated by an UPDATE statement. For example:

SQL

CREATE TABLE t3

(a int not null

, b int not null);

CREATE TABLE t4

(a int not null

, b int not null);

GO

INSERT INTO t3 VALUES (1,10),(2,20),(3,30);

INSERT INTO t4 VALUES (1,10),(2,20),(3,30);

GO

UPDATE t3 SET t3.b = t4.b

FROM t3

INNER JOIN t4 WITH (UPDLOCK) ON t3.a = t4.a;

In the previous query example, only table t4 will be affected by the locking hint, while
t3 can still benefit from optimized locking.

SQL

UPDATE t3 SET t3.b = t4.b

FROM t3 WITH (REPEATABLEREAD)

INNER JOIN t4 ON t3.a = t4.a;

In the previous query example, only table t3 will use the repeatable read isolation level,
and will hold locks until the end of the transaction. Other updates to t3 can still benefit
from optimized locking. The same applies to the HOLDLOCK hint.

Frequently asked questions (FAQ)

Where is optimized locking currently available?


Currently, optimized locking is available in Azure SQL Database.

Optimized locking is available in the following service tiers:

all DTU service tiers


all vCore service tiers, including provisioned and serverless

Optimized locking is not currently available in:

Azure SQL Managed Instance


SQL Server 2022 (16.x)

Is optimized locking on by default in both new and


existing databases?
In Azure SQL Database, yes.

How can I detect if optimized locking is enabled?


See Is optimized locking enabled?

What happens when accelerated database recovery (ADR)


is not enabled on my database?
If ADR is disabled, optimized locking is automatically disabled as well.

What if I want to force queries to block despite optimized


locking?
For customers using RCSI, to force blocking between two queries when optimized
locking is enabled, use the READCOMMITTEDLOCK query hint.

Can I disable optimized locking?


Currently, customers can create a support request to disable optimized locking.

Use the following steps to create a new support request from the Azure portal for Azure
SQL Database.

1. First, verify that optimized locking is enabled for your database.

2. On the Azure portal menu, select Help + support.


3. In Help + support, select Create a support request.

4. For Issue type, select Technical.

5. For Subscription, Service, and Resource, select the desired SQL Database.

6. In Summary, type "Disable optimized locking".

7. For Problem Type, choose Performance and Query Execution.

8. For Problem Subtype, choose Blocking and deadlocks.

9. In Additional details, provide as much information as possible for why you would
like to disable optimized locking. We are interested to review the reasons and use
cases for disabling optimized locking with you.

Next steps
Transaction locking and row versioning guide
Read committed snapshot isolation (RCSI)
sys.dm_tran_locks (Transact-SQL)
Accelerated database recovery in Azure SQL
Accelerated database recovery
Tutorial: Migrate SQL Server to an Azure
SQL Managed Instance offline using
DMS (classic)
Article • 03/08/2023

) Important

Azure Database Migration Service (classic) - SQL scenarios are on a deprecation


path . Beginning 01 August 2023, you will no longer be able to create new
Database Migration Service (classic) resource for SQL Server scenarios from Azure
portal and will be retired on 15 March 2026 for all customers. Please migrate to
Azure SQL database services by using the latest Azure Database Migration
Service version which is available as an extension in Azure Data Studio,or by
using Azure PowerShell and Azure CLI. Learn more .

7 Note

This tutorial uses an older version of the Azure Database Migration Service. For
improved functionality and supportability, consider migrating to Azure SQL
Managed Instance by using the Azure SQL migration extension for Azure Data
Studio.

To compare features between versions, review compare versions.

You can use Azure Database Migration Service to migrate the databases from a SQL
Server instance to an Azure SQL Managed Instance. For additional methods that may
require some manual effort, see the article SQL Server to Azure SQL Managed Instance.

In this tutorial, you migrate the AdventureWorks2016 database from an on-premises


instance of SQL Server to a SQL Managed Instance by using Azure Database Migration
Service.

You will learn how to:

" Register the Azure DataMigration resource provider.


" Create an instance of Azure Database Migration Service.
" Create a migration project by using Azure Database Migration Service.
" Run the migration.
" Monitor the migration.

) Important

For offline migrations from SQL Server to SQL Managed Instance, Azure Database
Migration Service can create the backup files for you. Alternately, you can provide
the latest full database backup in the SMB network share that the service will use to
migrate your databases. Each backup can be written to either a separate backup file
or multiple backup files. However, appending multiple backups into a single backup
media is not supported. Note that you can use compressed backups as well, to
reduce the likelihood of experiencing potential issues with migrating large backups.

 Tip

In Azure Database Migration Service, you can migrate your databases offline or
while they are online. In an offline migration, application downtime starts when the
migration starts. To limit downtime to the time it takes you to cut over to the new
environment after the migration, use an online migration. We recommend that you
test an offline migration to determine whether the downtime is acceptable. If the
expected downtime isn't acceptable, do an online migration.

This article describes an offline migration from SQL Server to a SQL Managed Instance.
For an online migration, see Migrate SQL Server to an SQL Managed Instance online
using DMS.

Prerequisites
To complete this tutorial, you need to:

Download and install SQL Server 2016 or later .

Enable the TCP/IP protocol, which is disabled by default during SQL Server Express
installation, by following the instructions in the article Enable or Disable a Server
Network Protocol.

Restore the AdventureWorks2016 database to the SQL Server instance.

Create a Microsoft Azure Virtual Network for Azure Database Migration Service by
using the Azure Resource Manager deployment model, which provides site-to-site
connectivity to your on-premises source servers by using either ExpressRoute or
VPN. Learn network topologies for SQL Managed Instance migrations using Azure
Database Migration Service. For more information about creating a virtual network,
see the Virtual Network Documentation, and especially the quickstart articles with
step-by-step details.

7 Note

During virtual network setup, if you use ExpressRoute with network peering to
Microsoft, add the following service endpoints to the subnet in which the
service will be provisioned:
Target database endpoint (for example, SQL endpoint, Azure Cosmos DB
endpoint, and so on)
Storage endpoint
Service bus endpoint

This configuration is necessary because Azure Database Migration Service


lacks internet connectivity.

Ensure that your virtual network Network Security Group rules don't block the
outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For
more detail on virtual network NSG traffic filtering, see the article Filter network
traffic with network security groups.

Configure your Windows Firewall for source database engine access.

Open your Windows Firewall to allow Azure Database Migration Service to access
the source SQL Server, which by default is TCP port 1433. If your default instance is
listening on some other port, add that to the firewall.

If you're running multiple named SQL Server instances using dynamic ports, you
may wish to enable the SQL Browser Service and allow access to UDP port 1434
through your firewalls so that Azure Database Migration Service can connect to a
named instance on your source server.

If you're using a firewall appliance in front of your source databases, you may need
to add firewall rules to allow Azure Database Migration Service to access the
source database(s) for migration, as well as files via SMB port 445.

Create a SQL Managed Instance by following the detail in the article Create a SQL
Managed Instance in the Azure portal.

Ensure that the logins used to connect the source SQL Server and target SQL
Managed Instance are members of the sysadmin server role.
7 Note

By default, Azure Database Migration Service only supports migrating SQL


logins. However, you can enable the ability to migrate Windows logins by:
Ensuring that the target SQL Managed Instance has AAD read access,
which can be configured via the Azure portal by a user with the Global
Administrator role.
Configuring your Azure Database Migration Service instance to enable
Windows user/group login migrations, which is set up via the Azure portal,
on the Configuration page. After enabling this setting, restart the service
for the changes to take effect.

After restarting the service, Windows user/group logins appear in the list of
logins available for migration. For any Windows user/group logins you
migrate, you are prompted to provide the associated domain name. Service
user accounts (account with domain name NT AUTHORITY) and virtual user
accounts (account name with domain name NT SERVICE) are not supported.

Create a network share that Azure Database Migration Service can use to back up
the source database.

Ensure that the service account running the source SQL Server instance has write
privileges on the network share that you created and that the computer account
for the source server has read/write access to the same share.

Make a note of a Windows user (and password) that has full control privilege on
the network share that you previously created. Azure Database Migration Service
impersonates the user credential to upload the backup files to Azure Storage
container for restore operation.

Create a blob container and retrieve its SAS URI by using the steps in the article
Manage Azure Blob Storage resources with Storage Explorer, be sure to select all
permissions (Read, Write, Delete, List) on the policy window while creating the SAS
URI. This detail provides Azure Database Migration Service with access to your
storage account container for uploading the backup files used for migrating
databases to SQL Managed Instance.

7 Note
Azure Database Migration Service does not support using an account level
SAS token when configuring the Storage Account settings during the
Configure Migration Settings step.

Ensure both the Azure Database Migration Service IP address and the Azure SQL
Managed Instance subnet can communicate with the blob container.

Register the resource provider


Register the Microsoft.DataMigration resource provider before you create your first
instance of the Database Migration Service.

1. Sign in to the Azure portal. Search for and select Subscriptions.

2. Select the subscription in which you want to create the instance of Azure Database
Migration Service, and then select Resource providers.

3. Search for migration, and then select Register for Microsoft.DataMigration.


Create an Azure Database Migration Service
instance
1. In the Azure portal menu or on the Home page, select Create a resource. Search
for and select Azure Database Migration Service.

2. On the Azure Database Migration Service screen, select Create.


Select the appropriate Source server type and Target server type, and choose the
Database Migration Service (Classic) option.

3. On the Create Migration Service basics screen:

Select the subscription.


Create a new resource group or choose an existing one.
Specify a name for the instance of the Azure Database Migration Service.
Select the location in which you want to create the instance of Azure
Database Migration Service.
Choose Azure as the service mode.
Select a pricing tier. For more information on costs and pricing tiers, see the
pricing page .
Select Next: Networking.

4. On the Create Migration Service networking screen:

Select an existing virtual network or create a new one. The virtual network
provides Azure Database Migration Service with access to the source server
and the target instance. For more information about how to create a virtual
network in the Azure portal, see the article Create a virtual network using the
Azure portal.
Select Review + Create to review the details and then select Create to create
the service.

After a few moments, your instance of the Azure Database Migration service
is created and ready to use:

7 Note

For additional detail, see the article Network topologies for Azure SQL Managed
Instance migrations using Azure Database Migration Service.
Create a migration project
After an instance of the service is created, locate it within the Azure portal, open it, and
then create a new migration project.

1. In the Azure portal menu, select All services. Search for and select Azure Database
Migration Services.

2. On the Azure Database Migration Services screen, select the Azure Database
Migration Service instance that you created.

3. Select New Migration Project.

4. On the New migration project screen, specify a name for the project, in the
Source server type text box, select SQL Server, in the Target server type text box,
select Azure SQL Database Managed Instance, and then for Choose type of
activity, select Offline data migration.
5. Select Create and run activity to create the project and run the migration activity.

Specify source details


1. On the Select source screen, specify the connection details for the source SQL
Server instance.

Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server
instance name. You can also use the IP Address for situations in which DNS name
resolution isn't possible.

2. If you haven't installed a trusted certificate on your server, select the Trust server
certificate check box.

When a trusted certificate isn't installed, SQL Server generates a self-signed


certificate when the instance is started. This certificate is used to encrypt the
credentials for client connections.
U Caution

TLS connections that are encrypted using a self-signed certificate does not
provide strong security. They are susceptible to man-in-the-middle attacks.
You should not rely on TLS using self-signed certificates in a production
environment or on servers that are connected to the internet.

3. Select Next: Select target

Specify target details


1. On the Select target screen, specify the connection details for the target, which is
the pre-provisioned SQL Managed Instance to which you're migrating the
AdventureWorks2016 database.

If you haven't already provisioned the SQL Managed Instance, select the link to
help you provision the instance. You can still continue with project creation and
then, when the SQL Managed Instance is ready, return to this specific project to
execute the migration.
2. Select Next: Select databases. On the Select databases screen, select the
AdventureWorks2016 database for migration.

) Important

If you use SQL Server Integration Services (SSIS), DMS does not currently
support migrating the catalog database for your SSIS projects/packages
(SSISDB) from SQL Server to SQL Managed Instance. However, you can
provision SSIS in Azure Data Factory (ADF) and redeploy your SSIS
projects/packages to the destination SSISDB hosted by SQL Managed
Instance. For more information about migrating SSIS packages, see the article
Migrate SQL Server Integration Services packages to Azure.

3. Select Next: Select logins

Select logins
1. On the Select logins screen, select the logins that you want to migrate.

7 Note

By default, Azure Database Migration Service only supports migrating SQL


logins. To enable support for migrating Windows logins, see the Prerequisites
section of this tutorial.

2. Select Next: Configure migration settings.


Configure migration settings
1. On the Configure migration settings screen, provide the following details:

Parameter Description

Choose Choose the option I will provide latest backup files when you already have
source full backup files available for DMS to use for database migration. Choose the
backup option I will let Azure Database Migration Service create backup files when
option you want DMS to take the source database full backup at first and use it for
migration.

Network The local SMB network share that Azure Database Migration Service can take
location the source database backups to. The service account running source SQL
share Server instance must have write privileges on this network share. Provide an
FQDN or IP addresses of the server in the network share, for example,
'\\servername.domainname.com\backupfolder' or '\\IP address\backupfolder'.

User name Make sure that the Windows user has full control privilege on the network
share that you provided above. Azure Database Migration Service will
impersonate the user credential to upload the backup files to Azure Storage
container for restore operation. If TDE-enabled databases are selected for
migration, the above windows user must be the built-in administrator account
and User Account Control must be disabled for Azure Database Migration
Service to upload and delete the certificates files.)

Password Password for the user.

Storage The SAS URI that provides Azure Database Migration Service with access to
account your storage account container to which the service uploads the backup files
settings and that is used for migrating databases to SQL Managed Instance. Learn how
to get the SAS URI for blob container. This SAS URI must be for the blob
container, not for the storage account.

TDE If you're migrating the source databases with Transparent Data Encryption
Settings (TDE) enabled, you need to have write privileges on the target SQL Managed
Instance. Select the subscription in which the SQL Managed Instance
provisioned from the drop-down menu. Select the target Azure SQL Database
Managed Instance in the drop-down menu.
2. Select Next: Summary.

Review the migration summary


1. On the Summary screen, in the Activity name text box, specify a name for the
migration activity.

2. Review and verify the details associated with the migration project.
Run the migration
Select Start migration.

The migration activity window appears that displays the current migration status of
the databases and logins.

Monitor the migration


1. In the migration activity screen, select Refresh to update the display.

2. You can further expand the databases and logins categories to monitor the
migration status of the respective server objects.
3. After the migration completes, verify the target database on the SQL Managed
Instance environment.

Additional resources
For a tutorial showing you how to migrate a database to SQL Managed Instance
using the T-SQL RESTORE command, see Restore a backup to SQL Managed
Instance using the restore command.
For information about SQL Managed Instance, see What is SQL Managed Instance.
For information about connecting apps to SQL Managed Instance, see Connect
applications.
Quickstart: Run simple Python scripts
with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In this quickstart, you'll run a set of simple Python scripts using SQL Server Machine
Learning Services, Azure SQL Managed Instance Machine Learning Services, or SQL
Server Big Data Clusters. You'll learn how to use the stored procedure
sp_execute_external_script to execute the script in a SQL Server instance.

Prerequisites
You need the following prerequisites to run this quickstart.

A SQL database on one of these platforms:


SQL Server Machine Learning Services. To install, see the Windows installation
guide or the Linux installation guide.
SQL Server 2019 Big Data Clusters. See how to enable Machine Learning
Services on SQL Server 2019 Big Data Clusters.
Azure SQL Managed Instance Machine Learning Services. For information, see
the Azure SQL Managed Instance Machine Learning Services overview.

A tool for running SQL queries that contain Python scripts. This quickstart uses
Azure Data Studio.

Run a simple script


To run a Python script, you'll pass it as an argument to the system stored procedure,
sp_execute_external_script. This system stored procedure starts the Python runtime in
the context of SQL machine learning, passes data to Python, manages Python user
sessions securely, and returns any results to the client.

In the following steps, you'll run this example Python script in your database:

Python

a = 1

b = 2

c = a/b

d = a*b

print(c, d)

1. Open a new query window in Azure Data Studio connected to your SQL instance.

2. Pass the complete Python script to the sp_execute_external_script stored


procedure.

The script is passed through the @script argument. Everything inside the @script
argument must be valid Python code.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

a = 1

b = 2

c = a/b

d = a*b

print(c, d)

'

3. The correct result is calculated and the Python print function returns the result to
the Messages window.

It should look something like this.

Results

text

STDOUT message(s) from external script:

0.5 2

Run a Hello World script


A typical example script is one that just outputs the string "Hello World". Run the
following command.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'OutputDataSet = InputDataSet'

, @input_data_1 = N'SELECT 1 AS hello'

WITH RESULT SETS(([Hello World] INT));

GO

Inputs to the sp_execute_external_script stored procedure include:

Input Description

@language defines the language extension to call, in this case Python

@script defines the commands passed to the Python runtime. Your entire Python script
must be enclosed in this argument, as Unicode text. You could also add the text
to a variable of type nvarchar and then call the variable

@input_data_1 data returned by the query, passed to the Python runtime, which returns the
data as a data frame

WITH RESULT clause defines the schema of the returned data table for SQL machine learning,
SETS adding "Hello World" as the column name, int for the data type

The command outputs the following text:

Hello World

Use inputs and outputs


By default, sp_execute_external_script accepts a single dataset as input, which typically
you supply in the form of a valid SQL query. It then returns a single Python data frame
as output.

For now, let's use the default input and output variables of sp_execute_external_script :
InputDataSet and OutputDataSet.

1. Create a small table of test data.

SQL

CREATE TABLE PythonTestData (col1 INT NOT NULL)

INSERT INTO PythonTestData

VALUES (1);

INSERT INTO PythonTestData

VALUES (10);

INSERT INTO PythonTestData

VALUES (100);

GO

2. Use the SELECT statement to query the table.

SQL

SELECT *

FROM PythonTestData

Results

3. Run the following Python script. It retrieves the data from the table using the
SELECT statement, passes it through the Python runtime, and returns the data as a

data frame. The WITH RESULT SETS clause defines the schema of the returned data
table for SQL, adding the column name NewColName.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'OutputDataSet = InputDataSet;'

, @input_data_1 = N'SELECT * FROM PythonTestData;'

WITH RESULT SETS(([NewColName] INT NOT NULL));

Results

4. Now change the names of the input and output variables. The default input and
output variable names are InputDataSet and OutputDataSet, the following script
changes the names to SQL_in and SQL_out:

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'SQL_out = SQL_in;'

, @input_data_1 = N'SELECT 12 as Col;'

, @input_data_1_name = N'SQL_in'

, @output_data_1_name = N'SQL_out'

WITH RESULT SETS(([NewColName] INT NOT NULL));

Note that Python is case-sensitive. The input and output variables used in the
Python script (SQL_out, SQL_in) need to match the names defined with
@input_data_1_name and @output_data_1_name , including case.

 Tip

Only one input dataset can be passed as a parameter, and you can return only
one dataset. However, you can call other datasets from inside your Python
code and you can return outputs of other types in addition to the dataset. You
can also add the OUTPUT keyword to any parameter to have it returned with
the results.

5. You can also generate values just using the Python script with no input data
( @input_data_1 is set to blank).

The following script outputs the text "hello" and "world".

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pandas as pd

mytextvariable = pandas.Series(["hello", " ", "world"]);

OutputDataSet = pd.DataFrame(mytextvariable);

'

, @input_data_1 = N''

WITH RESULT SETS(([Col1] CHAR(20) NOT NULL));

Results

@script as input" />

 Tip

Python uses leading spaces to group statements. So when the imbedded Python
script spans multiple lines, as in the preceding script, don't try to indent the Python
commands to be in line with the SQL commands. For example, this script will
produce an error:

SQL
EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pandas as pd

mytextvariable = pandas.Series(["hello", " ", "world"]);

OutputDataSet = pd.DataFrame(mytextvariable);

'

, @input_data_1 = N''

WITH RESULT SETS(([Col1] CHAR(20) NOT NULL));

Check Python version


If you would like to see which version of Python is installed in your server, run the
following script.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import sys

print(sys.version)

'

GO

The Python print function returns the version to the Messages window. In the example
output below, you can see that in this case, Python version 3.5.2 is installed.

Results

text

STDOUT message(s) from external script:

3.5.2 |Continuum Analytics, Inc.| (default, Jul 5 2016, 11:41:13) [MSC


v.1900 64 bit (AMD64)]

List Python packages


Microsoft provides a number of Python packages pre-installed with Machine Learning
Services in SQL Server 2016 (13.x), SQL Server 2017 (14.x), and SQL Server 2019 (15.x). In
SQL Server 2022 (16.x), you can download and install any custom Python runtimes and
packages as desired.

To see a list of which Python packages are installed, including version, run the following
script.
SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pkg_resources

import pandas

dists = [str(d) for d in pkg_resources.working_set]

OutputDataSet = pandas.DataFrame(dists)

'

WITH RESULT SETS(([Package] NVARCHAR(max)))

GO

The list is from pkg_resources.working_set in Python and returned to SQL as a data


frame.

Next steps
To learn how to use data structures when using Python in SQL machine learning, follow
this quickstart:

Quickstart: Data structures and objects using Python


Quickstart: Data structures and objects
using Python with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In this quickstart, you'll learn how to use data structures and data types when using
Python in SQL Server Machine Learning Services, Azure SQL Managed Instance Machine
Learning Services, or on SQL Server Big Data Clusters. You'll learn about moving data
between Python and SQL Server, and the common issues that might occur.

SQL machine learning relies on the Python pandas package, which is great for working
with tabular data. However, you cannot pass a scalar from Python to your database and
expect it to just work. In this quickstart, you'll review some basic data structure
definitions, to prepare you for additional issues that you might run across when passing
tabular data between Python and the database.

Concepts to know up front include:

A data frame is a table with multiple columns.


A single column of a data frame is a list-like object called a series.
A single value of a data frame is called a cell and is accessed by index.

How would you expose the single result of a calculation as a data frame, if a data.frame
requires a tabular structure? One answer is to represent the single scalar value as a
series, which is easily converted to a data frame.

7 Note

When returning dates, Python in SQL uses DATETIME which has a restricted date
range of 1753-01-01(-53690) through 9999-12-31(2958463).

Prerequisites
You need the following prerequisites to run this quickstart.

A SQL database on one of these platforms:


SQL Server Machine Learning Services. To install, see the Windows installation
guide or the Linux installation guide.
SQL Server Big Data Clusters. See how to enable Machine Learning Services on
SQL Server Big Data Clusters.
Azure SQL Managed Instance Machine Learning Services. For information, see
the Azure SQL Managed Instance Machine Learning Services overview.

A tool for running SQL queries that contain Python scripts. This quickstart uses
Azure Data Studio.

Scalar value as a series


This example does some simple math and converts a scalar into a series.

1. A series requires an index, which you can assign manually, as shown here, or
programmatically.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

a = 1

b = 2

c = a/b

print(c)

s = pandas.Series(c, index =["simple math example 1"])

print(s)

'

Because the series hasn't been converted to a data.frame, the values are returned
in the Messages window, but you can see that the results are in a more tabular
format.

Results

text

STDOUT message(s) from external script:

0.5

simple math example 1 0.5

dtype: float64

2. To increase the length of the series, you can add new values, using an array.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

a = 1

b = 2

c = a/b

d = a*b

s = pandas.Series([c,d])

print(s)

'

If you do not specify an index, an index is generated that has values starting with 0
and ending with the length of the array.

Results

text

STDOUT message(s) from external script:

0 0.5

1 2.0

dtype: float64

3. If you increase the number of index values, but don't add new data values, the
data values are repeated to fill the series.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

a = 1

b = 2

c = a/b

s = pandas.Series(c, index =["simple math example 1", "simple math


example 2"])

print(s)

'

Results

text

STDOUT message(s) from external script:

0.5

simple math example 1 0.5

simple math example 2 0.5

dtype: float64

Convert series to data frame


Having converted the scalar math results to a tabular structure, you still need to convert
them to a format that SQL machine learning can handle.

1. To convert a series to a data.frame, call the pandas DataFrame method.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pandas as pd

a = 1

b = 2

c = a/b

d = a*b

s = pandas.Series([c,d])

print(s)

df = pd.DataFrame(s)

OutputDataSet = df

'

WITH RESULT SETS((ResultValue FLOAT))

The result is shown below. Even if you use the index to get specific values from the
data.frame, the index values aren't part of the output.

Results

ResultValue

0.5

Output values into data.frame


Now you'll output specific values from two series of math results in a data.frame. The
first has an index of sequential values generated by Python. The second uses an
arbitrary index of string values.

1. The following example gets a value from the series using an integer index.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pandas as pd

a = 1

b = 2

c = a/b

d = a*b

s = pandas.Series([c,d])

print(s)

df = pd.DataFrame(s, index=[1])

OutputDataSet = df

'

WITH RESULT SETS((ResultValue FLOAT))

Results

ResultValue

2.0

Remember that the auto-generated index starts at 0. Try using an out of range
index value and see what happens.

2. Now get a single value from the other data frame using a string index.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pandas as pd

a = 1

b = 2

c = a/b

s = pandas.Series(c, index =["simple math example 1", "simple math


example 2"])

print(s)

df = pd.DataFrame(s, index=["simple math example 1"])

OutputDataSet = df

'

WITH RESULT SETS((ResultValue FLOAT))

Results

ResultValue

0.5

If you try to use a numeric index to get a value from this series, you get an error.

Next steps
To learn about writing advanced Python functions with SQL machine learning, follow this
quickstart:
Write advanced Python functions
Quickstart: Python functions with SQL
machine learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In this quickstart, you'll learn how to use Python mathematical and utility functions with
SQL Server Machine Learning Services, Azure SQL Managed Instance Machine Learning
Services, or SQL Server Big Data Clusters. Statistical functions are often complicated to
implement in T-SQL, but can be done in Python with only a few lines of code.

Prerequisites
You need the following prerequisites to run this quickstart.

A SQL database on one of these platforms:


SQL Server Machine Learning Services. To install, see the Windows installation
guide or the Linux installation guide.
SQL Server Big Data Clusters. See how to enable Machine Learning Services on
SQL Server Big Data Clusters.
Azure SQL Managed Instance Machine Learning Services. For information, see
the Azure SQL Managed Instance Machine Learning Services overview.

A tool for running SQL queries that contain Python scripts. This quickstart uses
Azure Data Studio.

Create a stored procedure to generate random


numbers
For simplicity, let's use the Python numpy package, that's installed and loaded by default.
The package contains hundreds of functions for common statistical tasks, among them
the random.normal function, which generates a specified number of random numbers
using the normal distribution, given a standard deviation and mean.

For example, the following Python code returns 100 numbers on a mean of 50, given a
standard deviation of 3.

Python

numpy.random.normal(size=100, loc=50, scale=3)

To call this line of Python from T-SQL, add the Python function in the Python script
parameter of sp_execute_external_script . The output expects a data frame, so use
pandas to convert it.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import numpy

import pandas

OutputDataSet = pandas.DataFrame(numpy.random.normal(size=100, loc=50,


scale=3));

'

, @input_data_1 = N' ;'

WITH RESULT SETS(([Density] FLOAT NOT NULL));

What if you'd like to make it easier to generate a different set of random numbers? You
define a stored procedure that gets the arguments from the user, then pass those
arguments into the Python script as variables.

SQL

CREATE PROCEDURE MyPyNorm (

@param1 INT

, @param2 INT

, @param3 INT

AS

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import numpy

import pandas

OutputDataSet = pandas.DataFrame(numpy.random.normal(size=mynumbers,
loc=mymean, scale=mysd));

'

, @input_data_1 = N' ;'

, @params = N' @mynumbers int, @mymean int, @mysd int'

, @mynumbers = @param1

, @mymean = @param2

, @mysd = @param3

WITH RESULT SETS(([Density] FLOAT NOT NULL));

The first line defines each of the SQL input parameters that are required when the
stored procedure is executed.

The line beginning with @params defines all variables used by the Python code, and
the corresponding SQL data types.
The lines that immediately follow map the SQL parameter names to the
corresponding Python variable names.

Now that you've wrapped the Python function in a stored procedure, you can easily call
the function and pass in different values, like this:

SQL

EXECUTE MyPyNorm @param1 = 100,@param2 = 50, @param3 = 3

Use Python utility functions for


troubleshooting
Python packages provide a variety of utility functions for investigating the current
Python environment. These functions can be useful if you're finding discrepancies in the
way your Python code performs in SQL Server and in outside environments.

For example, you might use system timing functions in the time package to measure
the amount of time used by Python processes and analyze performance issues.

SQL

EXECUTE sp_execute_external_script

@language = N'Python'

, @script = N'

import time

start_time = time.time()

# Run Python processes

elapsed_time = time.time() - start_time

'

, @input_data_1 = N' ;';

Next steps
To create a machine learning model using Python with SQL machine learning, follow this
quickstart:

Quickstart: Create and score a predictive model in Python


Quickstart: Create and score a predictive
model in Python with SQL machine
learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In this quickstart, you'll create and train a predictive model using Python. You'll save the
model to a table in your SQL Server instance, and then use the model to predict values
from new data using SQL Server Machine Learning Services, Azure SQL Managed
Instance Machine Learning Services, or SQL Server Big Data Clusters.

You'll create and execute two stored procedures running in SQL. The first one uses the
classic Iris flower data set and generates a Naïve Bayes model to predict an Iris species
based on flower characteristics. The second procedure is for scoring - it calls the model
generated in the first procedure to output a set of predictions based on new data. By
placing Python code in a SQL stored procedure, operations are contained in SQL, are
reusable, and can be called by other stored procedures and client applications.

By completing this quickstart, you'll learn:

" How to embed Python code in a stored procedure


" How to pass inputs to your code through inputs on the stored procedure
" How stored procedures are used to operationalize models

Prerequisites
You need the following prerequisites to run this quickstart.

A SQL database on one of these platforms:


SQL Server Machine Learning Services. To install, see the Windows installation
guide or the Linux installation guide.
SQL Server Big Data Clusters. See how to enable Machine Learning Services on
SQL Server Big Data Clusters.
Azure SQL Managed Instance Machine Learning Services. For information, see
the Azure SQL Managed Instance Machine Learning Services overview.

A tool for running SQL queries that contain Python scripts. This quickstart uses
Azure Data Studio.
The sample data used in this exercise is the Iris sample data. Follow the instructions
in Iris demo data to create the sample database irissql.

Create a stored procedure that generates


models
In this step, you'll create a stored procedure that generates a model for predicting
outcomes.

1. Open Azure Data Studio, connect to your SQL instance, and open a new query
window.

2. Connect to the irissql database.

SQL

USE irissql

GO

3. Copy in the following code to create a new stored procedure.

When executed, this procedure calls sp_execute_external_script to start a Python


session.

Inputs needed by your Python code are passed as input parameters on this stored
procedure. Output will be a trained model, based on the Python scikit-learn library
for the machine learning algorithm.

This code uses pickle to serialize the model. The model will be trained using
data from columns 0 through 4 from the iris_data table.

The parameters you see in the second part of the procedure articulate data inputs
and model outputs. As much as possible, you want the Python code running in a
stored procedure to have clearly defined inputs and outputs that map to stored
procedure inputs and outputs passed in at run time.

SQL

CREATE PROCEDURE generate_iris_model (@trained_model VARBINARY(max)


OUTPUT)

AS

BEGIN

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pickle

from sklearn.naive_bayes import GaussianNB

GNB = GaussianNB()

trained_model = pickle.dumps(GNB.fit(iris_data[["Sepal.Length",
"Sepal.Width", "Petal.Length", "Petal.Width"]],
iris_data[["SpeciesId"]].values.ravel()))

'

, @input_data_1 = N'select "Sepal.Length", "Sepal.Width",


"Petal.Length", "Petal.Width", "SpeciesId" from iris_data'

, @input_data_1_name = N'iris_data'

, @params = N'@trained_model varbinary(max) OUTPUT'

, @trained_model = @trained_model OUTPUT;

END;

GO

4. Verify the stored procedure exists.

If the T-SQL script from the previous step ran without error, a new stored
procedure called generate_iris_model is created and added to the irissql database.
You can find stored procedures in the Azure Data Studio Object Explorer, under
Programmability.

Execute the procedure to create and train


models
In this step, you execute the procedure to run the embedded code, creating a trained
and serialized model as an output.

Models that are stored for reuse in your database are serialized as a byte stream and
stored in a VARBINARY(MAX) column in a database table. Once the model is created,
trained, serialized, and saved to a database, it can be called by other procedures or by
the PREDICT T-SQL function in scoring workloads.

1. Run the following script to execute the procedure. The specific statement for
executing a stored procedure is EXECUTE on the fourth line.

This particular script deletes an existing model of the same name ("Naive Bayes")
to make room for new ones created by rerunning the same procedure. Without
model deletion, an error occurs stating the object already exists. The model is
stored in a table called iris_models, provisioned when you created the irissql
database.

SQL

DECLARE @model varbinary(max);

DECLARE @new_model_name varchar(50)

SET @new_model_name = 'Naive Bayes'

EXECUTE generate_iris_model @model OUTPUT;

DELETE iris_models WHERE model_name = @new_model_name;

INSERT INTO iris_models (model_name, model) values(@new_model_name,


@model);

GO

2. Verify that the model was inserted.

SQL

SELECT * FROM dbo.iris_models

Results

model_name model

Naive Bayes 0x800363736B6C6561726E2E6E616976655F62617965730A...

Create and execute a stored procedure for


generating predictions
Now that you have created, trained, and saved a model, move on to the next step:
creating a stored procedure that generates predictions. You'll do this by calling
sp_execute_external_script to run a Python script that loads the serialized model and

gives it new data inputs to score.

1. Run the following code to create the stored procedure that performs scoring. At
run time, this procedure will load a binary model, use columns [1,2,3,4] as inputs,
and specify columns [0,5,6] as output.

SQL

CREATE PROCEDURE predict_species (@model VARCHAR(100))

AS

BEGIN

DECLARE @nb_model VARBINARY(max) = (

SELECT model

FROM iris_models

WHERE model_name = @model

);

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pickle

irismodel = pickle.loads(nb_model)

species_pred = irismodel.predict(iris_data[["Sepal.Length",
"Sepal.Width", "Petal.Length", "Petal.Width"]])

iris_data["PredictedSpecies"] = species_pred

OutputDataSet = iris_data[["id","SpeciesId","PredictedSpecies"]]

print(OutputDataSet)

'

, @input_data_1 = N'select id, "Sepal.Length", "Sepal.Width",


"Petal.Length", "Petal.Width", "SpeciesId" from iris_data'

, @input_data_1_name = N'iris_data'

, @params = N'@nb_model varbinary(max)'

, @nb_model = @nb_model

WITH RESULT SETS((

"id" INT

, "SpeciesId" INT

, "SpeciesId.Predicted" INT

));

END;

GO

2. Execute the stored procedure, giving the model name "Naive Bayes" so that the
procedure knows which model to use.

SQL

EXECUTE predict_species 'Naive Bayes';

GO

When you run the stored procedure, it returns a Python data.frame. This line of T-
SQL specifies the schema for the returned results: WITH RESULT SETS ( ("id" int,
"SpeciesId" int, "SpeciesId.Predicted" int)); . You can insert the results into a

new table, or return them to an application.

The results are 150 predictions about species using floral characteristics as inputs.
For the majority of the observations, the predicted species matches the actual
species.
This example has been made simple by using the Python iris dataset for both
training and scoring. A more typical approach would involve running a SQL query
to get the new data, and passing that into Python as InputDataSet .

Conclusion
In this exercise, you learned how to create stored procedures dedicated to different
tasks, where each stored procedure used the system stored procedure
sp_execute_external_script to start a Python process. Inputs to the Python process are

passed to sp_execute_external as parameters. Both the Python script itself and data
variables in a database are passed as inputs.

Generally, you should only plan on using Azure Data Studio with polished Python code,
or simple Python code that returns row-based output. As a tool, Azure Data Studio
supports query languages like T-SQL and returns flattened rowsets. If your code
generates visual output like a scatterplot or histogram, you need a separate tool or end-
user application that can render the image outside of the stored procedure.

For some Python developers who are used to writing all-inclusive script handling a
range of operations, organizing tasks into separate procedures might seem unnecessary.
But training and scoring have different use cases. By separating them, you can put each
task on a different schedule and scope permissions to each operation.

A final benefit is that the processes can be modified using parameters. In this exercise,
Python code that created the model (named "Naive Bayes" in this example) was passed
as an input to a second stored procedure calling the model in a scoring process. This
exercise only uses one model, but you can imagine how parameterizing the model in a
scoring task would make that script more useful.

Next steps
For more information on tutorials for Python with SQL machine learning, see:

Python tutorials
Deploy and make predictions with an
ONNX model and SQL machine learning
Article • 01/04/2023

In this quickstart, you'll learn how to train a model, convert it to ONNX, deploy it to
Azure SQL Edge, and then run native PREDICT on data using the uploaded ONNX
model.

This quickstart is based on scikit-learn and uses the Boston Housing dataset .

Before you begin


If you're using Azure SQL Edge, and you haven't deployed an Azure SQL Edge
module, follow the steps of deploy SQL Edge using the Azure portal.

Install Azure Data Studio.

Install Python packages needed for this quickstart:

1. Open New Notebook connected to the Python 3 Kernel.


2. Select Manage Packages
3. In the Installed tab, look for the following Python packages in the list of
installed packages. If any of these packages are not installed, select the Add
New tab, search for the package, and select Install.
scikit-learn
numpy
onnxmltools
onnxruntime
pyodbc
setuptools
skl2onnx
sqlalchemy

For each script part below, enter it in a cell in the Azure Data Studio notebook and
run the cell.

Train a pipeline
Split the dataset to use features to predict the median value of a house.
Python

import numpy as np

import onnxmltools

import onnxruntime as rt

import pandas as pd

import skl2onnx

import sklearn

import sklearn.datasets

from sklearn.datasets import load_boston

boston = load_boston()

boston

df = pd.DataFrame(data=np.c_[boston['data'], boston['target']],
columns=boston['feature_names'].tolist() + ['MEDV'])

target_column = 'MEDV'

# Split the data frame into features and target

x_train = pd.DataFrame(df.drop([target_column], axis = 1))

y_train = pd.DataFrame(df.iloc[:,df.columns.tolist().index(target_column)])

print("\n*** Training dataset x\n")

print(x_train.head())

print("\n*** Training dataset y\n")

print(y_train.head())

Output:

text

*** Training dataset x

CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX \

0 0.00632 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0

1 0.02731 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0

2 0.02729 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0

3 0.03237 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0

4 0.06905 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0

PTRATIO B LSTAT

0 15.3 396.90 4.98

1 17.8 396.90 9.14

2 17.8 392.83 4.03

3 18.7 394.63 2.94

4 18.7 396.90 5.33

*** Training dataset y

0 24.0

1 21.6

2 34.7

3 33.4

4 36.2

Name: MEDV, dtype: float64

Create a pipeline to train the LinearRegression model. You can also use other regression
models.

Python

from sklearn.compose import ColumnTransformer

from sklearn.linear_model import LinearRegression

from sklearn.pipeline import Pipeline

from sklearn.preprocessing import RobustScaler

continuous_transformer = Pipeline(steps=[('scaler', RobustScaler())])

# All columns are numeric - normalize them

preprocessor = ColumnTransformer(
transformers=[

('continuous', continuous_transformer, [i for i in


range(len(x_train.columns))])])

model = Pipeline(

steps=[

('preprocessor', preprocessor),

('regressor', LinearRegression())])

# Train the model

model.fit(x_train, y_train)

Check the accuracy of the model and then calculate the R2 score and mean squared
error.

Python

# Score the model

from sklearn.metrics import r2_score, mean_squared_error

y_pred = model.predict(x_train)

sklearn_r2_score = r2_score(y_train, y_pred)

sklearn_mse = mean_squared_error(y_train, y_pred)

print('*** Scikit-learn r2 score: {}'.format(sklearn_r2_score))

print('*** Scikit-learn MSE: {}'.format(sklearn_mse))

Output:

text
*** Scikit-learn r2 score: 0.7406426641094094

*** Scikit-learn MSE: 21.894831181729206

Convert the model to ONNX


Convert the data types to the supported SQL data types. This conversion will be
required for other dataframes as well.

Python

from skl2onnx.common.data_types import FloatTensorType, Int64TensorType,


DoubleTensorType

def convert_dataframe_schema(df, drop=None, batch_axis=False):

inputs = []

nrows = None if batch_axis else 1

for k, v in zip(df.columns, df.dtypes):

if drop is not None and k in drop:

continue

if v == 'int64':

t = Int64TensorType([nrows, 1])

elif v == 'float32':

t = FloatTensorType([nrows, 1])

elif v == 'float64':

t = DoubleTensorType([nrows, 1])

else:

raise Exception("Bad type")

inputs.append((k, t))

return inputs

Using skl2onnx , convert the LinearRegression model to the ONNX format and save it
locally.

Python

# Convert the scikit model to onnx format

onnx_model = skl2onnx.convert_sklearn(model, 'Boston Data',


convert_dataframe_schema(x_train), final_types=
[('variable1',FloatTensorType([1,1]))])

# Save the onnx model locally

onnx_model_path = 'boston1.model.onnx'

onnxmltools.utils.save_model(onnx_model, onnx_model_path)

7 Note
You may need to set the target_opset parameter for the skl2onnx.convert_sklearn
function if there is a mismatch between ONNX runtime version in SQL Edge and
skl2onnx packge. For more information, see the SQL Edge Release notes to get the
ONNX runtime version corresponding for the release, and pick the target_opset
for the ONNX runtime based on the ONNX backward compatibility matrix .

Test the ONNX model


After converting the model to ONNX format, score the model to show little to no
degradation in performance.

7 Note

ONNX Runtime uses floats instead of doubles so small discrepancies are possible.

Python

import onnxruntime as rt

sess = rt.InferenceSession(onnx_model_path)

y_pred = np.full(shape=(len(x_train)), fill_value=np.nan)

for i in range(len(x_train)):

inputs = {}

for j in range(len(x_train.columns)):

inputs[x_train.columns[j]] = np.full(shape=(1,1),
fill_value=x_train.iloc[i,j])

sess_pred = sess.run(None, inputs)

y_pred[i] = sess_pred[0][0][0]

onnx_r2_score = r2_score(y_train, y_pred)

onnx_mse = mean_squared_error(y_train, y_pred)

print()

print('*** Onnx r2 score: {}'.format(onnx_r2_score))

print('*** Onnx MSE: {}\n'.format(onnx_mse))

print('R2 Scores are equal' if sklearn_r2_score == onnx_r2_score else


'Difference in R2 scores: {}'.format(abs(sklearn_r2_score - onnx_r2_score)))

print('MSE are equal' if sklearn_mse == onnx_mse else 'Difference in MSE


scores: {}'.format(abs(sklearn_mse - onnx_mse)))

print()

Output:

text
*** Onnx r2 score: 0.7406426691136831

*** Onnx MSE: 21.894830759270633

R2 Scores are equal

MSE are equal

Insert the ONNX model


Store the model in Azure SQL Edge, in a models table in a database onnx . In the
connection string, specify the server address, username, and password.

Python

import pyodbc

server = '' # SQL Server IP address

username = '' # SQL Server username

password = '' # SQL Server password

# Connect to the master DB to create the new onnx database

connection_string = "Driver={ODBC Driver 17 for SQL Server};Server=" +


server + ";Database=master;UID=" + username + ";PWD=" + password + ";"

conn = pyodbc.connect(connection_string, autocommit=True)

cursor = conn.cursor()

database = 'onnx'

query = 'DROP DATABASE IF EXISTS ' + database

cursor.execute(query)

conn.commit()

# Create onnx database

query = 'CREATE DATABASE ' + database

cursor.execute(query)

conn.commit()

# Connect to onnx database

db_connection_string = "Driver={ODBC Driver 17 for SQL Server};Server=" +


server + ";Database=" + database + ";UID=" + username + ";PWD=" + password +
";"

conn = pyodbc.connect(db_connection_string, autocommit=True)

cursor = conn.cursor()

table_name = 'models'

# Drop the table if it exists

query = f'drop table if exists {table_name}'

cursor.execute(query)

conn.commit()

# Create the model table

query = f'create table {table_name} ( ' \

f'[id] [int] IDENTITY(1,1) NOT NULL, ' \

f'[data] [varbinary](max) NULL, ' \

f'[description] varchar(1000))'

cursor.execute(query)

conn.commit()

# Insert the ONNX model into the models table

query = f"insert into {table_name} ([description], [data]) values ('Onnx


Model',?)"

model_bits = onnx_model.SerializeToString()

insert_params = (pyodbc.Binary(model_bits))

cursor.execute(query, insert_params)

conn.commit()

Load the data


Load the data into SQL.

First, create two tables, features and target, to store subsets of the Boston housing
dataset.

Features contains all data being used to predict the target, median value.
Target contains the median value for each record in the dataset.

Python

import sqlalchemy

from sqlalchemy import create_engine

import urllib

db_connection_string = "Driver={ODBC Driver 17 for SQL Server};Server=" +


server + ";Database=" + database + ";UID=" + username + ";PWD=" + password +
";"

conn = pyodbc.connect(db_connection_string)

cursor = conn.cursor()

features_table_name = 'features'

# Drop the table if it exists

query = f'drop table if exists {features_table_name}'

cursor.execute(query)

conn.commit()

# Create the features table

query = \

f'create table {features_table_name} ( ' \

f' [CRIM] float, ' \

f' [ZN] float, ' \

f' [INDUS] float, ' \

f' [CHAS] float, ' \

f' [NOX] float, ' \

f' [RM] float, ' \

f' [AGE] float, ' \

f' [DIS] float, ' \

f' [RAD] float, ' \

f' [TAX] float, ' \

f' [PTRATIO] float, ' \

f' [B] float, ' \

f' [LSTAT] float, ' \

f' [id] int)'

cursor.execute(query)

conn.commit()

target_table_name = 'target'

# Create the target table

query = \

f'create table {target_table_name} ( ' \

f' [MEDV] float, ' \

f' [id] int)'

x_train['id'] = range(1, len(x_train)+1)

y_train['id'] = range(1, len(y_train)+1)

print(x_train.head())

print(y_train.head())

Finally, use sqlalchemy to insert the x_train and y_train pandas dataframes into the
tables features and target , respectively.

Python

db_connection_string = 'mssql+pyodbc://' + username + ':' + password + '@' +


server + '/' + database + '?driver=ODBC+Driver+17+for+SQL+Server'

sql_engine = sqlalchemy.create_engine(db_connection_string)

x_train.to_sql(features_table_name, sql_engine, if_exists='append',


index=False)

y_train.to_sql(target_table_name, sql_engine, if_exists='append',


index=False)

Now you can view the data in the database.

Run PREDICT using the ONNX model


With the model in SQL, run native PREDICT on the data using the uploaded ONNX
model.

7 Note

Change the notebook kernel to SQL to run the remaining cell.

SQL

USE onnx

DECLARE @model VARBINARY(max) = (

SELECT DATA

FROM dbo.models

WHERE id = 1

);

WITH predict_input

AS (

SELECT TOP (1000) [id]

, CRIM

, ZN

, INDUS

, CHAS

, NOX

, RM

, AGE

, DIS

, RAD

, TAX

, PTRATIO

, B

, LSTAT

FROM [dbo].[features]

SELECT predict_input.id

, p.variable1 AS MEDV

FROM PREDICT(MODEL = @model, DATA = predict_input, RUNTIME=ONNX) WITH


(variable1 FLOAT) AS p;

Next Steps
Machine Learning and AI with ONNX in SQL Edge
Quickstart: Run simple R scripts with
SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In this quickstart, you'll run a set of simple R scripts using Azure SQL Managed Instance
Machine Learning Services. You'll learn how to use the stored procedure
sp_execute_external_script to execute the script in your database.

Prerequisites
You need the following prerequisites to run this quickstart.

Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.

A tool for running SQL queries that contain R scripts. This quickstart uses Azure
Data Studio.

Run a simple script


To run an R script, you'll pass it as an argument to the system stored procedure,
sp_execute_external_script. This system stored procedure starts the R runtime, passes
data to R, manages R user sessions securely, and returns any results to the client.

In the following steps, you'll run this example R script:

a <- 1

b <- 2

c <- a/b

d <- a*b

print(c(c, d))

1. Open Azure Data Studio and connect to your server.

2. Pass the complete R script to the sp_execute_external_script stored procedure.

The script is passed through the @script argument. Everything inside the @script
argument must be valid R code.
SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'

a <- 1

b <- 2

c <- a/b

d <- a*b

print(c(c, d))

'

3. The correct result is calculated and the R print function returns the result to the
Messages window.

It should look something like this.

Results

text

STDOUT message(s) from external script:

0.5 2

Run a Hello World script


A typical example script is one that just outputs the string "Hello World". Run the
following command.

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'OutputDataSet<-InputDataSet'

, @input_data_1 = N'SELECT 1 AS hello'

WITH RESULT SETS(([Hello World] INT));

GO

Inputs to the sp_execute_external_script stored procedure include:

Input Description

@language defines the language extension to call, in this case, R

@script defines the commands passed to the R runtime. Your entire R script must be
enclosed in this argument, as Unicode text. You could also add the text to a
variable of type nvarchar and then call the variable
Input Description

@input_data_1 data returned by the query, passed to the R runtime, which returns the data as a
data frame

WITH RESULT clause defines the schema of the returned data table, adding "Hello World" as
SETS the column name, int for the data type

The command outputs the following text:

Hello World

Use inputs and outputs


By default, sp_execute_external_script accepts a single dataset as input, which typically
you supply in the form of a valid SQL query. It then returns a single R data frame as
output.

For now, let's use the default input and output variables of sp_execute_external_script :
InputDataSet and OutputDataSet.

1. Create a small table of test data.

SQL

CREATE TABLE RTestData (col1 INT NOT NULL)

INSERT INTO RTestData

VALUES (1);

INSERT INTO RTestData

VALUES (10);

INSERT INTO RTestData

VALUES (100);

GO

2. Use the SELECT statement to query the table.

SQL

SELECT *

FROM RTestData

Results

3. Run the following R script. It retrieves the data from the table using the SELECT
statement, passes it through the R runtime, and returns the data as a data frame.
The WITH RESULT SETS clause defines the schema of the returned data table for
SQL, adding the column name NewColName.

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'OutputDataSet <- InputDataSet;'

, @input_data_1 = N'SELECT * FROM RTestData;'

WITH RESULT SETS(([NewColName] INT NOT NULL));

Results

4. Now let's change the names of the input and output variables. The default input
and output variable names are InputDataSet and OutputDataSet, this script
changes the names to SQL_in and SQL_out:

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N' SQL_out <- SQL_in;'

, @input_data_1 = N' SELECT 12 as Col;'

, @input_data_1_name = N'SQL_in'

, @output_data_1_name = N'SQL_out'

WITH RESULT SETS(([NewColName] INT NOT NULL));

Note that R is case-sensitive. The input and output variables used in the R script
(SQL_out, SQL_in) need to match the names defined with @input_data_1_name and
@output_data_1_name , including case.

 Tip
Only one input dataset can be passed as a parameter, and you can return only
one dataset. However, you can call other datasets from inside your R code
and you can return outputs of other types in addition to the dataset. You can
also add the OUTPUT keyword to any parameter to have it returned with the
results.

5. You also can generate values just using the R script with no input data
( @input_data_1 is set to blank).

The following script outputs the text "hello" and "world".

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'

mytextvariable <- c("hello", " ", "world");

OutputDataSet <- as.data.frame(mytextvariable);

'

, @input_data_1 = N''

WITH RESULT SETS(([Col1] CHAR(20) NOT NULL));

Results

@script as input" />

Check R version
If you would like to see which version of R is installed, run the following script.

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'print(version)';

GO

The R print function returns the version to the Messages window. In the example
output below, you can see that in this case, R version 3.4.4 is installed.

Results

text
STDOUT message(s) from external script:

platform x86_64-w64-mingw32
arch x86_64

os mingw32

system x86_64, mingw32

status

major 3

minor 4.4

year 2018

month 03

day 15

svn rev 74408

language R

version.string R version 3.4.4 (2018-03-15)

nickname Someone to Lean On

List R packages
To see a list of which R packages are installed, including version, dependencies, license,
and library path information, run the following script.

SQL

EXEC sp_execute_external_script @language = N'R'

, @script = N'

OutputDataSet <- data.frame(installed.packages()[,c("Package", "Version",


"Depends", "License", "LibPath")]);'

WITH result sets((

Package NVARCHAR(255)

, Version NVARCHAR(100)

, Depends NVARCHAR(4000)

, License NVARCHAR(1000)

, LibPath NVARCHAR(2000)

));

The output is from installed.packages() in R and is returned as a result set.

Results
Next steps
To learn how to use data structures when using R with SQL machine learning, follow this
quickstart:

Handle data types and objects using R with SQL machine learning
Quickstart: Data structures, data types,
and objects using R with SQL machine
learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In this quickstart, you'll learn how to use data structures and data types when using R in
Azure SQL Managed Instance Machine Learning Services. You'll learn about moving data
between R and SQL Managed Instance, and the common issues that might occur.

Common issues to know up front include:

Data types sometimes don't match


Implicit conversions might take place
Cast and convert operations are sometimes required
R and SQL use different data objects

Prerequisites
You need the following prerequisites to run this quickstart.

Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.

A tool for running SQL queries that contain R scripts. This quickstart uses Azure
Data Studio.

Always return a data frame


When your script returns results from R to SQL Server, it must return the data as a
data.frame. Any other type of object that you generate in your script - whether that be a
list, factor, vector, or binary data - must be converted to a data frame if you want to
output it as part of the stored procedure results. Fortunately, there are multiple R
functions to support changing other objects to a data frame. You can even serialize a
binary model and return it in a data frame, which you'll do later in this quickstart.

First, let's experiment with some R basic R objects - vectors, matrices, and lists - and see
how conversion to a data frame changes the output passed to SQL Server.
Compare these two "Hello World" scripts in R. The scripts look almost identical, but the
first returns a single column of three values, whereas the second returns three columns
with a single value each.

Example 1

SQL

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N' mytextvariable <- c("hello", " ", "world");

OutputDataSet <- as.data.frame(mytextvariable);'

, @input_data_1 = N' ';

Example 2

SQL

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N' OutputDataSet<- data.frame(c("hello"), " ",


c("world"));'

, @input_data_1 = N' ';

Identify schema and data types


Why are the results so different?

The answer can usually be found by using the R str() command. Add the function
str(object_name) anywhere in your R script to have the data schema of the specified R

object returned as an informational message.

To figure out why Example 1 and Example 2 have such different results, insert the line
str(OutputDataSet) at the end of the @script variable definition in each statement, like

this:

Example 1 with str function added

SQL

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N' mytextvariable <- c("hello", " ", "world");

OutputDataSet <- as.data.frame(mytextvariable);

str(OutputDataSet);'

, @input_data_1 = N' '

Example 2 with str function added

SQL

EXECUTE sp_execute_external_script

@language = N'R',

@script = N' OutputDataSet <- data.frame(c("hello"), " ", c("world"));

str(OutputDataSet);' ,

@input_data_1 = N' ';

Now, review the text in Messages to see why the output is different.

Results - Example 1

SQL

STDOUT message(s) from external script:

'data.frame': 3 obs. of 1 variable:

$ mytextvariable: Factor w/ 3 levels " ","hello","world": 2 1 3

Results - Example 2

SQL

STDOUT message(s) from external script:

'data.frame': 1 obs. of 3 variables:

$ c..hello..: Factor w/ 1 level "hello": 1

$ X... : Factor w/ 1 level " ": 1

$ c..world..: Factor w/ 1 level "world": 1

As you can see, a slight change in R syntax had a big effect on the schema of the results.
We won't go into why, but the differences in R data types are explained in details in the
Data Structures section in "Advanced R" by Hadley Wickham .

For now, just be aware that you need to check the expected results when coercing R
objects into data frames.

 Tip

You can also use R identity functions, such as is.matrix , is.vector , to return
information about the internal data structure.
Implicit conversion of data objects
Each R data object has its own rules for how values are handled when combined with
other data objects if the two data objects have the same number of dimensions, or if
any data object contains heterogeneous data types.

First, create a small table of test data.

SQL

CREATE TABLE RTestData (col1 INT NOT NULL)

INSERT INTO RTestData

VALUES (1);

INSERT INTO RTestData

VALUES (10);

INSERT INTO RTestData

VALUES (100);

GO

For example, assume you run the following statement to perform matrix multiplication
using R. You multiply a single-column matrix with the three values by an array with four
values, and expect a 4x3 matrix as a result.

SQL

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N'

x <- as.matrix(InputDataSet);

y <- array(12:15);

OutputDataSet <- as.data.frame(x %*% y);'

, @input_data_1 = N' SELECT [Col1] from RTestData;'

WITH RESULT SETS (([Col1] int, [Col2] int, [Col3] int, Col4 int));

Under the covers, the column of three values is converted to a single-column matrix.
Because a matrix is just a special case of an array in R, the array y is implicitly coerced to
a single-column matrix to make the two arguments conform.

Results

Col1 Col2 Col3 Col4

12 13 14 15
Col1 Col2 Col3 Col4

120 130 140 150

1200 1300 1400 1500

However, note what happens when you change the size of the array y .

SQL

execute sp_execute_external_script

@language = N'R'

, @script = N'

x <- as.matrix(InputDataSet);

y <- array(12:14);

OutputDataSet <- as.data.frame(y %*% x);'

, @input_data_1 = N' SELECT [Col1] from RTestData;'

WITH RESULT SETS (([Col1] int ));

Now R returns a single value as the result.

Results

Col1

1542

Why? In this case, because the two arguments can be handled as vectors of the same
length, R returns the inner product as a matrix. This is the expected behavior according
to the rules of linear algebra; however, it could cause problems if your downstream
application expects the output schema to never change!

 Tip

Getting errors? Make sure that you're running the stored procedure in the context
of the database that contains the table, and not in master or another database.

Also, we suggest that you avoid using temporary tables for these examples. Some R
clients will terminate a connection between batches, deleting temporary tables.

Merge or multiply columns of different length


R provides great flexibility for working with vectors of different sizes, and for combining
these column-like structures into data frames. Lists of vectors can look like a table, but
they don't follow all the rules that govern database tables.

For example, the following script defines a numeric array of length 6 and stores it in the
R variable df1 . The numeric array is then combined with the integers of the RTestData
table, which contains three (3) values, to make a new data frame, df2 .

SQL

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N'

df1 <- as.data.frame( array(1:6) );

df2 <- as.data.frame( c( InputDataSet , df1 ));

OutputDataSet <- df2'

, @input_data_1 = N' SELECT [Col1] from RTestData;'

WITH RESULT SETS (( [Col2] int not null, [Col3] int not null ));

To fill out the data frame, R repeats the elements retrieved from RTestData as many
times as needed to match the number of elements in the array df1 .

Results

Col2 Col3

1 1

10 2

100 3

1 4

10 5

100 6

Remember that a data frame only looks like a table, and is actually a list of vectors.

Cast or convert data


R and SQL Server don't use the same data types, so when you run a query in SQL Server
to get data and then pass that to the R runtime, some type of implicit conversion usually
takes place. Another set of conversions takes place when you return data from R to SQL
Server.

SQL Server pushes the data from the query to the R process managed by the
Launchpad service and converts it to an internal representation for greater
efficiency.
The R runtime loads the data into a data.frame variable and performs its own
operations on the data.
The database engine returns the data to SQL Server using a secured internal
connection and presents the data in terms of SQL Server data types.
You get the data by connecting to SQL Server using a client or network library that
can issue SQL queries and handle tabular data sets. This client application can
potentially affect the data in other ways.

To see how this works, run a query such as this one on the AdventureWorksDW data
warehouse. This view returns sales data used in creating forecasts.

SQL

USE AdventureWorksDW

GO

SELECT ReportingDate

, CAST(ModelRegion as varchar(50)) as ProductSeries

, Amount

FROM [AdventureWorksDW].[dbo].[vTimeSeries]

WHERE [ModelRegion] = 'M200 Europe'

ORDER BY ReportingDate ASC

7 Note

You can use any version of AdventureWorks, or create a different query using a
database of your own. The point is to try to handle some data that contains text,
datetime and numeric values.

Now, try pasting this query as the input to the stored procedure.

SQL

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N' str(InputDataSet);

OutputDataSet <- InputDataSet;'

, @input_data_1 = N'

SELECT ReportingDate

, CAST(ModelRegion as varchar(50)) as ProductSeries

, Amount

FROM [AdventureWorksDW].[dbo].[vTimeSeries]

WHERE [ModelRegion] = ''M200 Europe''

ORDER BY ReportingDate ASC ;'

WITH RESULT SETS undefined;

If you get an error, you'll probably need to make some edits to the query text. For
example, the string predicate in the WHERE clause must be enclosed by two sets of
single quotation marks.

After you get the query working, review the results of the str function to see how R
treats the input data.

Results

text

STDOUT message(s) from external script: 'data.frame': 37 obs. of 3


variables:

STDOUT message(s) from external script: $ ReportingDate: POSIXct, format:


"2010-12-24 23:00:00" "2010-12-24 23:00:00"

STDOUT message(s) from external script: $ ProductSeries: Factor w/ 1 levels


"M200 Europe",..: 1 1 1 1 1 1 1 1 1 1

STDOUT message(s) from external script: $ Amount : num 3400 16925


20350 16950 16950

The datetime column has been processed using the R data type, POSIXct.
The text column "ProductSeries" has been identified as a factor, meaning a
categorical variable. String values are handled as factors by default. If you pass a
string to R, it is converted to an integer for internal use, and then mapped back to
the string on output.

Summary
From even these short examples, you can see the need to check the effects of data
conversion when passing SQL queries as input. Because some SQL Server data types are
not supported by R, consider these ways to avoid errors:

Test your data in advance and verify columns or values in your schema that could
be a problem when passed to R code.
Specify columns in your input data source individually, rather than using SELECT * ,
and know how each column will be handled.
Perform explicit casts as necessary when preparing your input data, to avoid
surprises.
Avoid passing columns of data (such as GUIDs or rowguids) that cause errors and
aren't useful for modeling.

For more information on supported and unsupported data types, see R libraries and
data types.
Next steps
To learn about writing advanced R functions with SQL machine learning, follow this
quickstart:

Write advanced R functions with SQL machine learning


Quickstart: R functions with SQL
machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In this quickstart, you'll learn how to use data structures and data types when using R in
Azure SQL Managed Instance Machine Learning Services. You'll learn about moving data
between R and SQL Managed Instance, and the common issues that might occur.

Prerequisites
You need the following prerequisites to run this quickstart.

Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.

A tool for running SQL queries that contain R scripts. This quickstart uses Azure
Data Studio.

Create a stored procedure to generate random


numbers
For simplicity, let's use the R stats package, that's installed and loaded by default. The
package contains hundreds of functions for common statistical tasks, among them the
rnorm function, which generates a specified number of random numbers using the
normal distribution, given a standard deviation and mean.

For example, the following R code returns 100 numbers on a mean of 50, given a
standard deviation of 3.

as.data.frame(rnorm(100, mean = 50, sd = 3));

To call this line of R from T-SQL, add the R function in the R script parameter of
sp_execute_external_script , like this:

SQL
EXECUTE sp_execute_external_script

@language = N'R'

, @script = N'

OutputDataSet <- as.data.frame(rnorm(100, mean = 50, sd =3));'

, @input_data_1 = N' ;'

WITH RESULT SETS (([Density] float NOT NULL));

What if you'd like to make it easier to generate a different set of random numbers?

That's easy when combined with T-SQL. You define a stored procedure that gets the
arguments from the user, then pass those arguments into the R script as variables.

SQL

CREATE PROCEDURE MyRNorm (

@param1 INT

, @param2 INT

, @param3 INT

AS

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'

OutputDataSet <- as.data.frame(rnorm(mynumbers, mymean, mysd));'

, @input_data_1 = N' ;'

, @params = N' @mynumbers int, @mymean int, @mysd int'

, @mynumbers = @param1

, @mymean = @param2

, @mysd = @param3

WITH RESULT SETS(([Density] FLOAT NOT NULL));

The first line defines each of the SQL input parameters that are required when the
stored procedure is executed.

The line beginning with @params defines all variables used by the R code, and the
corresponding SQL data types.

The lines that immediately follow map the SQL parameter names to the
corresponding R variable names.

Now that you've wrapped the R function in a stored procedure, you can easily call the
function and pass in different values, like this:

SQL

EXECUTE MyRNorm @param1 = 100,@param2 = 50, @param3 = 3

Use R utility functions for troubleshooting


The utils package, installed by default, provides a variety of utility functions for
investigating the current R environment. These functions can be useful if you're finding
discrepancies in the way your R code performs in SQL Server and in outside
environments.

For example, you might use the system timing functions in R, such as system.time and
proc.time , to capture the time used by R processes and analyze performance issues. For

an example, see the tutorial Create Data Features where R timing functions are
embedded in the solution.

SQL

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N'

library(utils);

start.time <- proc.time();

# Run R processes

elapsed_time <- proc.time() - start.time;'

For other useful functions, see Use R code profiling functions to improve performance.

Next steps
To create a machine learning model using R with SQL machine learning, follow this
quickstart:

Create and score a predictive model in R with SQL machine learning


Quickstart: Create and score a predictive
model in R with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In this quickstart, you'll create and train a predictive model using T. You'll save the
model to a table in your SQL Server instance, and then use the model to predict values
from new data using Azure SQL Managed Instance Machine Learning Services.

You'll create and execute two stored procedures running in SQL. The first one uses the
mtcars dataset included with R and generates a simple generalized linear model (GLM)
that predicts the probability that a vehicle has been fitted with a manual transmission.
The second procedure is for scoring - it calls the model generated in the first procedure
to output a set of predictions based on new data. By placing R code in a SQL stored
procedure, operations are contained in SQL, are reusable, and can be called by other
stored procedures and client applications.

 Tip

If you need a refresher on linear models, try this tutorial which describes the
process of fitting a model using rxLinMod: Fitting Linear Models

By completing this quickstart, you'll learn:

" How to embed R code in a stored procedure


" How to pass inputs to your code through inputs on the stored procedure
" How stored procedures are used to operationalize models

Prerequisites
You need the following prerequisites to run this quickstart.

Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.

A tool for running SQL queries that contain R scripts. This quickstart uses Azure
Data Studio.
Create the model
To create the model, you'll create source data for training, create the model and train it
using the data, then store the model in a database where it can be used to generate
predictions with new data.

Create the source data


1. Open Azure Data Studio, connect to your instance, and open a new query window.

2. Create a table to save the training data.

SQL

CREATE TABLE dbo.MTCars(

mpg decimal(10, 1) NOT NULL,

cyl int NOT NULL,

disp decimal(10, 1) NOT NULL,

hp int NOT NULL,

drat decimal(10, 2) NOT NULL,

wt decimal(10, 3) NOT NULL,

qsec decimal(10, 2) NOT NULL,

vs int NOT NULL,

am int NOT NULL,

gear int NOT NULL,

carb int NOT NULL

);

3. Insert the data from the built-in dataset mtcars .

SQL

INSERT INTO dbo.MTCars

EXEC sp_execute_external_script @language = N'R'

, @script = N'MTCars <- mtcars;'

, @input_data_1 = N''

, @output_data_1_name = N'MTCars';

 Tip

Many datasets, small and large, are included with the R runtime. To get a list
of datasets installed with R, type library(help="datasets") from an R
command prompt.
Create and train the model
The car speed data contains two columns, both numeric: horsepower ( hp ) and weight
( wt ). From this data, you'll create a generalized linear model (GLM) that estimates the
probability that a vehicle has been fitted with a manual transmission.

To build the model, you define the formula inside your R code, and pass the data as an
input parameter.

SQL

DROP PROCEDURE IF EXISTS generate_GLM;

GO

CREATE PROCEDURE generate_GLM

AS

BEGIN

EXEC sp_execute_external_script

@language = N'R'

, @script = N'carsModel <- glm(formula = am ~ hp + wt, data =


MTCarsData, family = binomial);

trained_model <- data.frame(payload = as.raw(serialize(carsModel,


connection=NULL)));'

, @input_data_1 = N'SELECT hp, wt, am FROM MTCars'

, @input_data_1_name = N'MTCarsData'

, @output_data_1_name = N'trained_model'

WITH RESULT SETS ((model VARBINARY(max)));

END;

GO

The first argument to glm is the formula parameter, which defines am as


dependent on hp + wt .
The input data is stored in the variable MTCarsData , which is populated by the SQL
query. If you don't assign a specific name to your input data, the default variable
name would be InputDataSet.

Store the model in the database


Next, store the model in a database so you can use it for prediction or retrain it.

1. Create a table to store the model.

The output of an R package that creates a model is usually a binary object.


Therefore, the table where you store the model must provide a column of
varbinary(max) type.

SQL
CREATE TABLE GLM_models (

model_name varchar(30) not null default('default model') primary


key,

model varbinary(max) not null

);

2. Run the following Transact-SQL statement to call the stored procedure, generate
the model, and save it to the table you created.

SQL

INSERT INTO GLM_models(model)

EXEC generate_GLM;

 Tip

If you run this code a second time, you get this error: "Violation of PRIMARY
KEY constraint...Cannot insert duplicate key in object
dbo.stopping_distance_models". One option for avoiding this error is to
update the name for each new model. For example, you could change the
name to something more descriptive, and include the model type, the day
you created it, and so forth.

SQL

UPDATE GLM_models

SET model_name = 'GLM_' + format(getdate(), 'yyyy.MM.HH.mm', 'en-gb')


WHERE model_name = 'default model'

Score new data using the trained model


Scoring is a term used in data science to mean generating predictions, probabilities, or
other values based on new data fed into a trained model. You'll use the model you
created in the previous section to score predictions against new data.

Create a table of new data


First, create a table with new data.

SQL
CREATE TABLE dbo.NewMTCars(

hp INT NOT NULL

, wt DECIMAL(10,3) NOT NULL

, am INT NULL

GO

INSERT INTO dbo.NewMTCars(hp, wt)

VALUES (110, 2.634)

INSERT INTO dbo.NewMTCars(hp, wt)

VALUES (72, 3.435)

INSERT INTO dbo.NewMTCars(hp, wt)

VALUES (220, 5.220)

INSERT INTO dbo.NewMTCars(hp, wt)

VALUES (120, 2.800)

GO

Predict manual transmission


To get predictions based on your model, write a SQL script that does the following:

1. Gets the model you want


2. Gets the new input data
3. Calls an R prediction function that is compatible with that model

Over time, the table might contain multiple R models, all built using different
parameters or algorithms, or trained on different subsets of data. In this example, we'll
use the model named default model .

SQL

DECLARE @glmmodel varbinary(max) =

(SELECT model FROM dbo.GLM_models WHERE model_name = 'default model');

EXEC sp_execute_external_script

@language = N'R'

, @script = N'

current_model <- unserialize(as.raw(glmmodel));

new <- data.frame(NewMTCars);

predicted.am <- predict(current_model, new, type = "response");

str(predicted.am);

OutputDataSet <- cbind(new, predicted.am);

'

, @input_data_1 = N'SELECT hp, wt FROM dbo.NewMTCars'

, @input_data_1_name = N'NewMTCars'

, @params = N'@glmmodel varbinary(max)'

, @glmmodel = @glmmodel

WITH RESULT SETS ((new_hp INT, new_wt DECIMAL(10,3), predicted_am


DECIMAL(10,3)));

The script above performs the following steps:

Use a SELECT statement to get a single model from the table, and pass it as an
input parameter.

After retrieving the model from the table, call the unserialize function on the
model.

Apply the predict function with appropriate arguments to the model, and provide
the new input data.

7 Note

In the example, the str function is added during the testing phase, to check the
schema of data being returned from R. You can remove the statement later.

The column names used in the R script are not necessarily passed to the stored
procedure output. Here the WITH RESULTS clause is used to define some new
column names.

Results

It's also possible to use the PREDICT (Transact-SQL) statement to generate a predicted
value or score based on a stored model.

Next steps
For more information on tutorials for R with SQL machine learning, see:

R tutorials
Python tutorial: Predict ski rental with
linear regression with SQL machine
learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In this four-part tutorial series, you will use Python and linear regression in Azure SQL
Managed Instance Machine Learning Services to predict the number of ski rentals. The
tutorial uses a Python notebook in Azure Data Studio.

Imagine you own a ski rental business and you want to predict the number of rentals
that you'll have on a future date. This information will help you get your stock, staff, and
facilities ready.

In the first part of this series, you'll get set up with the prerequisites. In parts two and
three, you'll develop some Python scripts in a notebook to prepare your data and train a
machine learning model. Then, in part three, you'll run those Python scripts inside the
database using T-SQL stored procedures.

In this article, you'll learn how to:

" Import a sample database

In part two, you'll learn how to load the data from a database into a Python data frame,
and prepare the data in Python.

In part three, you'll learn how to train a linear regression model in Python.

In part four, you'll learn how to store the model in a database, and then create stored
procedures from the Python scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.

Prerequisites
Azure SQL Managed Instance Machine Learning Services - For information, see the
Azure SQL Managed Instance Machine Learning Services overview.

SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
Python IDE - This tutorial uses a Python notebook in Azure Data Studio. For more
information, see How to use notebooks in Azure Data Studio.

SQL query tool - This tutorial assumes you're using Azure Data Studio.

Additional Python packages - The examples in this tutorial series use the following
Python packages that may not be installed by default:
pandas
pyodbc
sklearn

To install these packages:

1. In your Azure Data Studio notebook, select Manage Packages.


2. In the Manage Packages pane, select the Add new tab.
3. For each of the following packages, enter the package name, select Search,
then select Install.

As an alternative, you can open a Command Prompt, change to the installation


path for the version of Python you use in Azure Data Studio (for example, cd
%LocalAppData%\Programs\Python\Python37-32 ), then run pip install for each

package.

Restore the sample database


The sample database used in this tutorial has been saved to a .bak database backup file
for you to download and use.

1. Download the file TutorialDB.bak .

2. Follow the directions in Restore a database to Azure SQL Managed Instance in SQL
Server Management Studio, using these details:

Import from the TutorialDB.bak file you downloaded.


Name the target database TutorialDB .

3. You can verify that the restored database exists by querying the dbo.rental_data
table:

SQL

USE TutorialDB;

SELECT * FROM [dbo].[rental_data];

Clean up resources
If you're not going to continue with this tutorial, delete the TutorialDB database.

Next steps
In part one of this tutorial series, you completed these steps:

Installed the prerequisites


Import a sample database

To prepare the data from the TutorialDB database, follow part two of this tutorial series:

Python Tutorial: Prepare data to train a linear regression model


Python Tutorial: Prepare data to train a
linear regression model with SQL
machine learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part two of this four-part tutorial series, you'll prepare data from a database using
Python. Later in this series, you'll use this data to train and deploy a linear regression
model in Python with Azure SQL Managed Instance Machine Learning Services.

In this article, you'll learn how to:

" Load the data from the database into a pandas data frame
" Prepare the data in Python by removing some columns

In part one, you learned how to restore the sample database.

In part three, you'll learn how to train a linear regression machine learning model in
Python.

In part four, you'll learn how to store the model in a database, and then create stored
procedures from the Python scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.

Prerequisites
Part two of this tutorial assumes you have completed part one and its
prerequisites.

Explore and prepare the data


To use the data in Python, you'll load the data from the database into a pandas data
frame.

Create a new Python notebook in Azure Data Studio and run the script below.

The Python script below imports the dataset from the dbo.rental_data table in your
database to a pandas data frame df.
In the connection string, replace connection details as needed. To use Windows
authentication with an ODBC connection string, specify Trusted_Connection=Yes;
instead of the UID and PWD parameters.

Python

import pyodbc

import pandas

from sklearn.linear_model import LinearRegression

from sklearn.metrics import mean_squared_error

# Connection string to your SQL Server instance

conn_str = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server}; SERVER=


<server>; DATABASE=TutorialDB;UID=<username>;PWD=<password>')

query_str = 'SELECT Year, Month, Day, Rentalcount, Weekday, Holiday, Snow


FROM dbo.rental_data'

df = pandas.read_sql(sql=query_str, con=conn_str)

print("Data frame:", df)

You should see results similar to the following.

results

Data frame: Year Month Day Rentalcount WeekDay Holiday Snow

0 2014 1 20 445 2 1 0

1 2014 2 13 40 5 0 0

2 2013 3 10 456 1 0 0

3 2014 3 31 38 2 0 0

4 2014 4 24 23 5 0 0

.. ... ... ... ... ... ... ...

448 2013 2 19 57 3 0 1

449 2015 3 18 26 4 0 0

450 2015 3 24 29 3 0 1

451 2014 3 26 50 4 0 1

452 2015 12 6 377 1 0 1

[453 rows x 7 columns]

Filter the columns from the dataframe to remove ones we don't want to use in the
training. Rentalcount should not be included as it is the target of the predictions.

Python

columns = df.columns.tolist()

columns = [c for c in columns if c not in ["Year", "Rentalcount"]]

print("Training set:", test[columns])

Note the data the training set will have access to:

results

Training set: Month Day Weekday Holiday Snow

1 2 13 5 0 0

3 3 31 2 0 0

7 3 8 7 0 0

15 3 4 2 0 1

22 1 18 1 0 0

.. ... ... ... ... ...

416 4 13 1 0 1

421 1 21 3 0 1

438 2 19 4 0 1

441 2 3 3 0 1

447 1 4 6 0 1

[91 rows x 5 columns]

Next steps
In part two of this tutorial series, you completed these steps:

Load the data from the database into a pandas data frame
Prepare the data in Python by removing some columns

To train a machine learning model that uses data from the TutorialDB database, follow
part three of this tutorial series:

Python Tutorial: Train a linear regression model


Python tutorial: Train a linear regression
model with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part three of this four-part tutorial series, you'll train a linear regression model in
Python. In the next part of this series, you'll deploy this model in an Azure SQL Managed
Instance database with Machine Learning Services.

In this article, you'll learn how to:

" Train a linear regression model


" Make predictions using the linear regression model

In part one, you learned how to restore the sample database.

In part two, you learned how to load the data from a database into a Python data frame,
and prepare the data in Python.

In part four, you'll learn how to store the model in a database, and then create stored
procedures from the Python scripts you developed in parts two and three. The stored
procedures will run in on the server to make predictions based on new data.

Prerequisites
Part three of this tutorial assumes you have completed part one and its
prerequisites.

Train the model


In order to predict, you have to find a function (model) that best describes the
dependency between the variables in our dataset. This called training the model. The
training dataset will be a subset of the entire dataset from the pandas data frame df
that you created in part two of this series.

You will train model lin_model using a linear regression algorithm.

Python

# Store the variable we'll be predicting on.

target = "Rentalcount"

# Generate the training set. Set random_state to be able to replicate


results.

train = df.sample(frac=0.8, random_state=1)

# Select anything not in the training set and put it in the testing set.

test = df.loc[~df.index.isin(train.index)]

# Print the shapes of both sets.

print("Training set shape:", train.shape)

print("Testing set shape:", test.shape)

# Initialize the model class.

lin_model = LinearRegression()

# Fit the model to the training data.

lin_model.fit(train[columns], train[target])

You should see results similar to the following.

results

Training set shape: (362, 7)

Testing set shape: (91, 7)

Make predictions
Use a predict function to predict the rental counts using the model lin_model .

Python

# Generate our predictions for the test set.

lin_predictions = lin_model.predict(test[columns])

print("Predictions:", lin_predictions)

# Compute error between our test predictions and the actual values.

lin_mse = mean_squared_error(lin_predictions, test[target])

print("Computed error:", lin_mse)

You should see results similar to the following.

results

Predictions: [124.41293228 123.8095075 117.67253182 209.39332151


135.46159387

199.50603805 472.14918499 90.15781602 216.61319499 120.30710327

89.47591091 127.71290441 207.44065517 125.68466139 201.38119194

204.29377218 127.4494643 113.42721447 127.37388762 94.66754136

90.21979191 173.86647615 130.34747586 111.81550069 118.88131715

124.74028405 211.95038051 202.06309706 123.53053083 167.06313191

206.24643852 122.64812937 179.98791527 125.1558454 168.00847713

120.2305587 196.60802649 117.00616326 173.20010759 89.9563518

92.11048236 120.91052805 175.47818992 129.65196995 120.97443971

175.95863082 127.24800008 135.05866542 206.49627783 91.63004147

115.78280925 208.92841718 213.5137192 212.83278197 96.74415948

95.1324457 199.9089665 206.10791806 126.16510228 120.0281266

209.08150631 132.88996619 178.84110582 128.85971386 124.67637239

115.58134503 96.82167192 514.61789505 125.48319717 207.50359894

121.64080826 201.9381774 113.22575025 202.46505762 90.7002328

92.31194658 201.25627228 516.97252195 91.36660136 599.27093251

199.6445585 123.66905128 117.4710676 173.12259514 129.60359486

209.59478573 206.29481361 210.69322009 205.50255751 210.88011563

207.65572019]

Computed error: 35003.54030828391

Next steps
In part three of this tutorial series, you completed these steps:

Train a linear regression model


Make predictions using the linear regression model

To deploy the machine learning model you've created, follow part four of this tutorial
series:

Python Tutorial: Deploy a machine learning model


Python Tutorial: Deploy a linear
regression model with SQL machine
learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part four of this four-part tutorial series, you'll deploy a linear regression model
developed in Python into an Azure SQL Managed Instance database using Machine
Learning Services.

In this article, you'll learn how to:

" Create a stored procedure that generates the machine learning model


" Store the model in a database table
" Create a stored procedure that makes predictions using the model
" Execute the model with new data

In part one, you learned how to restore the sample database.

In part two, you learned how to load the data from a database into a Python data frame,
and prepare the data in Python.

In part three, you learned how to train a linear regression machine learning model in
Python.

Prerequisites
Part four of this tutorial assumes you have completed part one and its
prerequisites.

Create a stored procedure that generates the


model
Now, using the Python scripts you developed, create a stored procedure
generate_rental_py_model that trains and generates the linear regression model using
LinearRegression from scikit-learn.

Run the following T-SQL statement in Azure Data Studio to create the stored procedure
to train the model.
SQL

-- Stored procedure that trains and generates a Python model using the
rental_data and a linear regression algorithm

DROP PROCEDURE IF EXISTS generate_rental_py_model;

go

CREATE PROCEDURE generate_rental_py_model (@trained_model varbinary(max)


OUTPUT)

AS

BEGIN

EXECUTE sp_execute_external_script

@language = N'Python'

, @script = N'

from sklearn.linear_model import LinearRegression

import pickle

df = rental_train_data

# Get all the columns from the dataframe.

columns = df.columns.tolist()

# Store the variable well be predicting on.

target = "RentalCount"

# Initialize the model class.

lin_model = LinearRegression()

# Fit the model to the training data.

lin_model.fit(df[columns], df[target])

# Before saving the model to the DB table, convert it to a binary object

trained_model = pickle.dumps(lin_model)'

, @input_data_1 = N'select "RentalCount", "Year", "Month", "Day", "WeekDay",


"Snow", "Holiday" from dbo.rental_data where Year < 2015'

, @input_data_1_name = N'rental_train_data'

, @params = N'@trained_model varbinary(max) OUTPUT'

, @trained_model = @trained_model OUTPUT;

END;

GO

Store the model in a database table


Create a table in the TutorialDB database and then save the model to the table.

1. Run the following T-SQL statement in Azure Data Studio to create a table called
dbo.rental_py_models which is used to store the model.

SQL
USE TutorialDB;

DROP TABLE IF EXISTS dbo.rental_py_models;

GO

CREATE TABLE dbo.rental_py_models (

model_name VARCHAR(30) NOT NULL DEFAULT('default model') PRIMARY


KEY,

model VARBINARY(MAX) NOT NULL

);

GO

2. Save the model to the table as a binary object, with the model name linear_model.

SQL

DECLARE @model VARBINARY(MAX);

EXECUTE generate_rental_py_model @model OUTPUT;

INSERT INTO rental_py_models (model_name, model) VALUES('linear_model',


@model);

Create a stored procedure that makes


predictions
1. Create a stored procedure py_predict_rentalcount that makes predictions using
the trained model and a set of new data. Run the T-SQL below in Azure Data
Studio.

SQL

DROP PROCEDURE IF EXISTS py_predict_rentalcount;

GO

CREATE PROCEDURE py_predict_rentalcount (@model varchar(100))

AS

BEGIN

DECLARE @py_model varbinary(max) = (select model from


rental_py_models where model_name = @model);

EXECUTE sp_execute_external_script

@language = N'Python',

@script = N'

# Import the scikit-learn function to compute error.

from sklearn.metrics import mean_squared_error

import pickle

import pandas

rental_model = pickle.loads(py_model)

df = rental_score_data

# Get all the columns from the dataframe.

columns = df.columns.tolist()

# Variable you will be predicting on.

target = "RentalCount"

# Generate the predictions for the test set.

lin_predictions = rental_model.predict(df[columns])

print(lin_predictions)

# Compute error between the test predictions and the actual values.

lin_mse = mean_squared_error(lin_predictions, df[target])

#print(lin_mse)

predictions_df = pandas.DataFrame(lin_predictions)

OutputDataSet = pandas.concat([predictions_df, df["RentalCount"],


df["Month"], df["Day"], df["WeekDay"], df["Snow"], df["Holiday"],
df["Year"]], axis=1)

'

, @input_data_1 = N'Select "RentalCount", "Year" ,"Month", "Day",


"WeekDay", "Snow", "Holiday" from rental_data where Year = 2015'

, @input_data_1_name = N'rental_score_data'

, @params = N'@py_model varbinary(max)'

, @py_model = @py_model

with result sets (("RentalCount_Predicted" float, "RentalCount" float,


"Month" float,"Day" float,"WeekDay" float,"Snow" float,"Holiday" float,
"Year" float));

END;

GO

2. Create a table for storing the predictions.

SQL

DROP TABLE IF EXISTS [dbo].[py_rental_predictions];

GO

CREATE TABLE [dbo].[py_rental_predictions](

[RentalCount_Predicted] [int] NULL,

[RentalCount_Actual] [int] NULL,

[Month] [int] NULL,

[Day] [int] NULL,

[WeekDay] [int] NULL,

[Snow] [int] NULL,

[Holiday] [int] NULL,

[Year] [int] NULL

) ON [PRIMARY]

GO

3. Execute the stored procedure to predict rental counts

SQL

--Insert the results of the predictions for test set into a table

INSERT INTO py_rental_predictions

EXEC py_predict_rentalcount 'linear_model';

-- Select contents of the table

SELECT * FROM py_rental_predictions;

You should see results similar to the following.

You have successfully created, trained, and deployed a model. You then used that model
in a stored procedure to predict values based on new data.

Next steps
In part four of this tutorial series, you completed these steps:

Create a stored procedure that generates the machine learning model


Store the model in a database table
Create a stored procedure that makes predictions using the model
Execute the model with new data

To learn more about using Python with SQL machine learning, see:

Python tutorials
Python tutorial: Categorizing customers
using k-means clustering with SQL
machine learning
Article • 04/17/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In this four-part tutorial series, use Python to develop and deploy a K-Means clustering
model in Azure SQL Managed Instance Machine Learning Services to cluster customer
data.

In part one of this series, set up the prerequisites for the tutorial and then restore a
sample dataset to a database. Later in this series, use this data to train and deploy a
clustering model in Python with SQL machine learning.

In parts two and three of this series, develop some Python scripts in an Azure Data
Studio notebook to analyze and prepare your data and train a machine learning model.
Then, in part four, run those Python scripts inside a database using stored procedures.

Clustering can be explained as organizing data into groups where members of a group
are similar in some way. For this tutorial series, imagine you own a retail business. Use
the K-Means algorithm to perform the clustering of customers in a dataset of product
purchases and returns. By clustering customers, you can focus your marketing efforts
more effectively by targeting specific groups. K-Means clustering is an unsupervised
learning algorithm that looks for patterns in data based on similarities.

In this article, learn how to:

" Restore a sample database

In part two, learn how to prepare the data from a database to perform clustering.

In part three, learn how to create and train a K-Means clustering model in Python.

In part four, learn how to create a stored procedure in a database that can perform
clustering in Python based on new data.

Prerequisites
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.

Azure Data Studio. use a notebook in Azure Data Studio for both Python and SQL.
For more information about notebooks, see How to use notebooks in Azure Data
Studio.

Additional Python packages - The examples in this tutorial series use Python
packages that you may or may not have installed.

Open an Administrative Command Prompt and change to the installation path for
the version of Python you use in Azure Data Studio. For example, cd
%LocalAppData%\Programs\Python\Python37-32 . Then run the following commands to

install any of these packages that aren't already installed. Ensure these packages
are installed in the correct Python installation location. You can use the option -t
to specify the destination directory.

Console

pip install matplotlib

pip install pandas

pip install pyodbc

pip install scipy

pip install scikit-learn

Restore the sample database


The sample dataset used in this tutorial has been saved to a .bak database backup file
for you to download and use. This dataset is derived from the tpcx-bb dataset
provided by the Transaction Processing Performance Council (TPC) .

1. Download the file tpcxbb_1gb.bak .

2. Follow the directions in Restore a database to a SQL Managed Instance in SQL


Server Management Studio, using these details:

Import from the tpcxbb_1gb.bak file you downloaded


Name the target database "tpcxbb_1gb"

3. You can verify that the dataset exists after you have restored the database by
querying the dbo.customer table:

SQL
USE tpcxbb_1gb;

SELECT * FROM [dbo].[customer];

Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.

Next steps
In part one of this tutorial series, you completed these steps:

Restore a sample database

To prepare the data for the machine learning model, follow part two of this tutorial
series:

Python tutorial: Prepare data to perform clustering


Python tutorial: Prepare data to
categorize customers with SQL machine
learning
Article • 04/17/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part two of this four-part tutorial series, you'll restore and prepare the data from a
database using Python. Later in this series, you'll use this data to train and deploy a
clustering model in Python with Azure SQL Managed Instance Machine Learning
Services.

In this article, you'll learn how to:

" Separate customers along different dimensions using Python


" Load the data from the database into a Python data frame

In part one, you installed the prerequisites and restored the sample database.

In part three, you'll learn how to create and train a K-Means clustering model in Python.

In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in Python based on new data.

Prerequisites
Part two of this tutorial assumes you have fulfilled the prerequisites of part one.

Separate customers
To prepare for clustering customers, you'll first separate customers along the following
dimensions:

orderRatio = return order ratio (total number of orders partially or fully returned
versus the total number of orders)
itemsRatio = return item ratio (total number of items returned versus the number
of items purchased)
monetaryRatio = return amount ratio (total monetary amount of items returned
versus the amount purchased)
frequency = return frequency
Open a new notebook in Azure Data Studio and enter the following script.

In the connection string, replace connection details as needed.

Python

# Load packages.

import pyodbc

import matplotlib.pyplot as plt

import numpy as np

import pandas as pd

from scipy.spatial import distance as sci_distance

from sklearn import cluster as sk_cluster

############################################################################
####################

## Connect to DB and select data

############################################################################
####################

# Connection string to connect to SQL Server named instance.

conn_str = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server}; SERVER=


<server>; DATABASE=tpcxbb_1gb; UID=<username>; PWD=<password>')

input_query = '''SELECT

ss_customer_sk AS customer,

ROUND(COALESCE(returns_count / NULLIF(1.0*orders_count, 0), 0), 7) AS


orderRatio,

ROUND(COALESCE(returns_items / NULLIF(1.0*orders_items, 0), 0), 7) AS


itemsRatio,

ROUND(COALESCE(returns_money / NULLIF(1.0*orders_money, 0), 0), 7) AS


monetaryRatio,

COALESCE(returns_count, 0) AS frequency

FROM

SELECT

ss_customer_sk,

-- return order ratio

COUNT(distinct(ss_ticket_number)) AS orders_count,

-- return ss_item_sk ratio

COUNT(ss_item_sk) AS orders_items,

-- return monetary amount ratio

SUM( ss_net_paid ) AS orders_money

FROM store_sales s

GROUP BY ss_customer_sk

) orders

LEFT OUTER JOIN

SELECT

sr_customer_sk,

-- return order ratio

count(distinct(sr_ticket_number)) as returns_count,

-- return ss_item_sk ratio

COUNT(sr_item_sk) as returns_items,

-- return monetary amount ratio

SUM( sr_return_amt ) AS returns_money

FROM store_returns

GROUP BY sr_customer_sk ) returned ON ss_customer_sk=sr_customer_sk'''

# Define the columns we wish to import.

column_info = {

"customer": {"type": "integer"},

"orderRatio": {"type": "integer"},

"itemsRatio": {"type": "integer"},

"frequency": {"type": "integer"}

Load the data into a data frame


Results from the query are returned to Python using the Pandas read_sql function. As
part of the process, you'll use the column information you defined in the previous script.

Python

customer_data = pd.read_sql(input_query, conn_str)

Now display the beginning of the data frame to verify it looks correct.

Python

print("Data frame:", customer_data.head(n=5))

results

Rows Read: 37336, Total Rows Processed: 37336, Total Chunk Time: 0.172
seconds

Data frame: customer orderRatio itemsRatio monetaryRatio frequency

0 29727.0 0.000000 0.000000 0.000000 0

1 97643.0 0.068182 0.078176 0.037034 3

2 57247.0 0.000000 0.000000 0.000000 0

3 32549.0 0.086957 0.068657 0.031281 4

4 2040.0 0.000000 0.000000 0.000000 0

Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.
Next steps
In part two of this tutorial series, you completed these steps:

Separate customers along different dimensions using Python


Load the data from the database into a Python data frame

To create a machine learning model that uses this customer data, follow part three of
this tutorial series:

Python tutorial: Create a predictive model


Python tutorial: Build a model to
categorize customers with SQL machine
learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part three of this four-part tutorial series, you'll build a K-Means model in Python to
perform clustering. In the next part of this series, you'll deploy this model in a database
with Azure SQL Managed Instance Machine Learning Services.

In this article, you'll learn how to:

" Define the number of clusters for a K-Means algorithm


" Perform clustering
" Analyze the results

In part one, you installed the prerequisites and restored the sample database.

In part two, you learned how to prepare the data from a database to perform clustering.

In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in Python based on new data.

Prerequisites
Part three of this tutorial assumes you have fulfilled the prerequisites of part one,
and completed the steps in part two.

Define the number of clusters


To cluster your customer data, you'll use the K-Means clustering algorithm, one of the
simplest and most well-known ways of grouping data.
You can read more about K-
Means in A complete guide to K-means clustering algorithm .

The algorithm accepts two inputs: The data itself, and a predefined number "k"
representing the number of clusters to generate.
The output is k clusters with the input
data partitioned among the clusters.
The goal of K-means is to group the items into k clusters such that all items in same
cluster are as similar to each other, and as different from items in other clusters, as
possible.

To determine the number of clusters for the algorithm to use, use a plot of the within
groups sum of squares, by number of clusters extracted. The appropriate number of
clusters to use is at the bend or "elbow" of the plot.

Python

############################################################################
####################

## Determine number of clusters using the Elbow method

############################################################################
####################

cdata = customer_data

K = range(1, 20)

KM = (sk_cluster.KMeans(n_clusters=k).fit(cdata) for k in K)

centroids = (k.cluster_centers_ for k in KM)

D_k = (sci_distance.cdist(cdata, cent, 'euclidean') for cent in centroids)

dist = (np.min(D, axis=1) for D in D_k)

avgWithinSS = [sum(d) / cdata.shape[0] for d in dist]

plt.plot(K, avgWithinSS, 'b*-')

plt.grid(True)

plt.xlabel('Number of clusters')

plt.ylabel('Average within-cluster sum of squares')

plt.title('Elbow for KMeans clustering')

plt.show()

Based on the graph, it looks like k = 4 would be a good value to try. That k value will
group the customers into four clusters.

Perform clustering
In the following Python script, you'll use the KMeans function from the sklearn package.

Python

############################################################################
####################

## Perform clustering using Kmeans

############################################################################
####################

# It looks like k=4 is a good number to use based on the elbow graph.

n_clusters = 4

means_cluster = sk_cluster.KMeans(n_clusters=n_clusters, random_state=111)

columns = ["orderRatio", "itemsRatio", "monetaryRatio", "frequency"]

est = means_cluster.fit(customer_data[columns])

clusters = est.labels_

customer_data['cluster'] = clusters

# Print some data about the clusters:

# For each cluster, count the members.

for c in range(n_clusters):

cluster_members=customer_data[customer_data['cluster'] == c][:]

print('Cluster{}(n={}):'.format(c, len(cluster_members)))

print('-'* 17)

print(customer_data.groupby(['cluster']).mean())

Analyze the results


Now that you've performed clustering using K-Means, the next step is to analyze the
result and see if you can find any actionable information.

Look at the clustering mean values and cluster sizes printed from the previous script.

results

Cluster0(n=31675):

-------------------

Cluster1(n=4989):

-------------------

Cluster2(n=1):

-------------------

Cluster3(n=671):

-------------------

customer orderRatio itemsRatio monetaryRatio frequency

cluster

0 50854.809882 0.000000 0.000000 0.000000 0.000000

1 51332.535779 0.721604 0.453365 0.307721 1.097815

2 57044.000000 1.000000 2.000000 108.719154 1.000000

3 48516.023845 0.136277 0.078346 0.044497 4.271237

The four cluster means are given using the variables defined in part one:

orderRatio = return order ratio (total number of orders partially or fully returned
versus the total number of orders)
itemsRatio = return item ratio (total number of items returned versus the number
of items purchased)
monetaryRatio = return amount ratio (total monetary amount of items returned
versus the amount purchased)
frequency = return frequency

Data mining using K-Means often requires further analysis of the results, and further
steps to better understand each cluster, but it can provide some good leads.
Here are a
couple ways you could interpret these results:
Cluster 0 seems to be a group of customers that are not active (all values are zero).
Cluster 3 seems to be a group that stands out in terms of return behavior.

Cluster 0 is a set of customers who are clearly not active. Perhaps you can target
marketing efforts towards this group to trigger an interest for purchases. In the next
step, you'll query the database for the email addresses of customers in cluster 0, so that
you can send a marketing email to them.

Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.

Next steps
In part three of this tutorial series, you completed these steps:

Define the number of clusters for a K-Means algorithm


Perform clustering
Analyze the results

To deploy the machine learning model you've created, follow part four of this tutorial
series:

Python tutorial: Deploy a clustering model


Python tutorial: Deploy a model to
categorize customers with SQL machine
learning
Article • 04/17/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part four of this four-part tutorial series, you'll deploy a clustering model, developed
in Python, into a database using Azure SQL Managed Instance Machine Learning
Services.

In order to perform clustering on a regular basis, as new customers are registering, you
need to be able call the Python script from any App. To do that, you can deploy the
Python script in a database by putting the Python script inside a SQL stored procedure.
Because your model executes in the database, it can easily be trained against data
stored in the database.

In this section, you'll move the Python code you just wrote onto the server and deploy
clustering.

In this article, you'll learn how to:

" Create a stored procedure that generates the model


" Perform clustering on the server
" Use the clustering information

In part one, you installed the prerequisites and restored the sample database.

In part two, you learned how to prepare the data from a database to perform clustering.

In part three, you learned how to create and train a K-Means clustering model in Python.

Prerequisites
Part four of this tutorial series assumes you have fulfilled the prerequisites of part
one, and completed the steps in part two and part three.

Create a stored procedure that generates the


model
Run the following T-SQL script to create the stored procedure. The procedure recreates
the steps you developed in parts one and two of this tutorial series:

classify customers based on their purchase and return history


generate four clusters of customers using a K-Means algorithm

SQL

USE [tpcxbb_1gb]

GO

DROP procedure IF EXISTS [dbo].[py_generate_customer_return_clusters];

GO

CREATE procedure [dbo].[py_generate_customer_return_clusters]

AS

BEGIN

DECLARE

-- Input query to generate the purchase history & return metrics

@input_query NVARCHAR(MAX) = N'

SELECT

ss_customer_sk AS customer,

CAST( (ROUND(COALESCE(returns_count / NULLIF(1.0*orders_count, 0), 0), 7)


) AS FLOAT) AS orderRatio,

CAST( (ROUND(COALESCE(returns_items / NULLIF(1.0*orders_items, 0), 0), 7)


) AS FLOAT) AS itemsRatio,

CAST( (ROUND(COALESCE(returns_money / NULLIF(1.0*orders_money, 0), 0), 7)


) AS FLOAT) AS monetaryRatio,

CAST( (COALESCE(returns_count, 0)) AS FLOAT) AS frequency

FROM

SELECT

ss_customer_sk,

-- return order ratio

COUNT(distinct(ss_ticket_number)) AS orders_count,

-- return ss_item_sk ratio

COUNT(ss_item_sk) AS orders_items,

-- return monetary amount ratio

SUM( ss_net_paid ) AS orders_money

FROM store_sales s

GROUP BY ss_customer_sk

) orders

LEFT OUTER JOIN

SELECT

sr_customer_sk,

-- return order ratio

count(distinct(sr_ticket_number)) as returns_count,

-- return ss_item_sk ratio

COUNT(sr_item_sk) as returns_items,

-- return monetary amount ratio

SUM( sr_return_amt ) AS returns_money

FROM store_returns

GROUP BY sr_customer_sk

) returned ON ss_customer_sk=sr_customer_sk

'

EXEC sp_execute_external_script

@language = N'Python'

, @script = N'

import pandas as pd

from sklearn.cluster import KMeans

#get data from input query

customer_data = my_input_data

#We concluded in step 2 in the tutorial that 4 would be a good number of


clusters

n_clusters = 4

#Perform clustering

est = KMeans(n_clusters=n_clusters,
random_state=111).fit(customer_data[["orderRatio","itemsRatio","monetaryRati
o","frequency"]])

clusters = est.labels_

customer_data["cluster"] = clusters

OutputDataSet = customer_data

'

, @input_data_1 = @input_query

, @input_data_1_name = N'my_input_data'

with result sets (("Customer" int, "orderRatio"


float,"itemsRatio" float,"monetaryRatio" float,"frequency" float,"cluster"
float));

END;

GO

Perform clustering
Now that you've created the stored procedure, execute the following script to perform
clustering using the procedure.

SQL

--Create a table to store the predictions in

DROP TABLE IF EXISTS [dbo].[py_customer_clusters];

GO

CREATE TABLE [dbo].[py_customer_clusters] (

[Customer] [bigint] NULL

, [OrderRatio] [float] NULL

, [itemsRatio] [float] NULL

, [monetaryRatio] [float] NULL

, [frequency] [float] NULL

, [cluster] [int] NULL

) ON [PRIMARY]

GO

--Execute the clustering and insert results into table

INSERT INTO py_customer_clusters

EXEC [dbo].[py_generate_customer_return_clusters];

-- Select contents of the table to verify it works

SELECT * FROM py_customer_clusters;

Use the clustering information


Because you stored the clustering procedure in the database, it can perform clustering
efficiently against customer data stored in the same database. You can execute the
procedure whenever your customer data is updated and use the updated clustering
information.

Suppose you want to send a promotional email to customers in cluster 0, the group that
was inactive (you can see how the four clusters were described in part three of this
tutorial). The following code selects the email addresses of customers in cluster 0.

SQL

USE [tpcxbb_1gb]

--Get email addresses of customers in cluster 0 for a promotion campaign

SELECT customer.[c_email_address], customer.c_customer_sk

FROM dbo.customer

JOIN

[dbo].[py_customer_clusters] as c

ON c.Customer = customer.c_customer_sk

WHERE c.cluster = 0

You can change the c.cluster value to return email addresses for customers in other
clusters.

Clean up resources
When you're finished with this tutorial, you can delete the tpcxbb_1gb database.
Next steps
In part four of this tutorial series, you completed these steps:

Create a stored procedure that generates the model


Perform clustering on the server
Use the clustering information

To learn more about using Python in SQL machine learning, see:

Quickstart: Create and run simple Python scripts


Other Python tutorials for SQL machine learning
Install Python packages with sqlmlutils
Python tutorial: Predict NYC taxi fares
with binary classification
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In this five-part tutorial series for SQL programmers, you'll learn about Python
integration in Machine Learning Services in Azure SQL Managed Instance.

You'll build and deploy a Python-based machine learning solution using a sample
database on SQL Server. You'll use T-SQL, Azure Data Studio or SQL Server Management
Studio, and a database instance with SQL machine learning and Python language
support.

This tutorial series introduces you to Python functions used in a data modeling
workflow. Parts include data exploration, building and training a binary classification
model, and model deployment. You'll use sample data from the New York City Taxi and
Limousine Commission. The model you'll build predicts whether a trip is likely to result
in a tip based on the time of day, distance traveled, and pick-up location.

In the first part of this series, you'll install the prerequisites and restore the sample
database. In parts two and three, you'll develop some Python scripts to prepare your
data and train a machine learning model. Then, in parts four and five, you'll run those
Python scripts inside the database using T-SQL stored procedures.

In this article, you'll:

" Install prerequisites
" Restore the sample database

In part two, you'll explore the sample data and generate some plots.

In part three, you'll learn how to create features from raw data by using a Transact-SQL
function. You'll then call that function from a stored procedure to create a table that
contains the feature values.

In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
7 Note

This tutorial is available in both R and Python. For the R version, see R tutorial:
Predict NYC taxi fares with binary classification.

Prerequisites
Grant permissions to execute Python scripts

Restore the NYC Taxi demo database

All tasks can be done using Transact-SQL stored procedures in Azure Data Studio or
Management Studio.

This tutorial series assumes familiarity with basic database operations such as creating
databases and tables, importing data, and writing SQL queries. It does not assume you
know Python and all Python code is provided.

Background for SQL developers


The process of building a machine learning solution is a complex one that can involve
multiple tools, and the coordination of subject matter experts across several phases:

obtaining and cleaning data


exploring the data and building features useful for modeling
training and tuning the model
deployment to production

Development and testing of the actual code is best performed using a dedicated
development environment. However, after the script is fully tested, you can easily deploy
it to SQL Server using Transact-SQL stored procedures in the familiar environment of
Azure Data Studio or Management Studio. Wrapping external code in stored procedures
is the primary mechanism for operationalizing code in SQL Server.

After the model has been saved to the database, you can call the model for predictions
from Transact-SQL by using stored procedures.

Whether you're a SQL programmer new to Python, or a Python developer new to SQL,
this five-part tutorial series introduces a typical workflow for conducting in-database
analytics with Python and SQL Server.
Next steps
In this article, you:

" Installed prerequisites
" Restored the sample database

Python tutorial: Explore and visualize data


Python tutorial: Explore and visualize
data
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part two of this five-part tutorial series, you'll explore the sample data and generate
some plots. Later, you'll learn how to serialize graphics objects in Python, and then
deserialize those objects and make plots.

In this article, you'll:

" Review the sample data


" Create plots using Python in T-SQL

In part one, you installed the prerequisites and restored the sample database.

In part three, you'll learn how to create features from raw data by using a Transact-SQL
function. You'll then call that function from a stored procedure to create a table that
contains the feature values.

In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.

Review the data


First, take a minute to browse the data schema, as we've made some changes to make it
easier to use the NYC Taxi data

The original dataset used separate files for the taxi identifiers and trip records.
We've joined the two original datasets on the columns medallion, hack_license, and
pickup_datetime.
The original dataset spanned many files and was quite large. We've downsampled
to get just 1% of the original number of records. The current data table has
1,703,957 rows and 23 columns.

Taxi identifiers

The medallion column represents the taxi's unique ID number.


The hack_license column contains the taxi driver's license number (anonymized).

Trip and fare records

Each trip record includes the pickup and drop-off location and time, and the trip
distance.

Each fare record includes payment information such as the payment type, total amount
of payment, and the tip amount.

The last three columns can be used for various machine learning tasks. The tip_amount
column contains continuous numeric values and can be used as the label column for
regression analysis. The tipped column has only yes/no values and is used for binary
classification. The tip_class column has multiple class labels and therefore can be used
as the label for multi-class classification tasks.

The values used for the label columns are all based on the tip_amount column, using
these business rules:

Label column tipped has possible values 0 and 1

If tip_amount > 0, tipped = 1; otherwise tipped = 0

Label column tip_class has possible class values 0-4

Class 0: tip_amount = $0

Class 1: tip_amount > $0 and tip_amount <= $5

Class 2: tip_amount > $5 and tip_amount <= $10

Class 3: tip_amount > $10 and tip_amount <= $20

Class 4: tip_amount > $20

Create plots using Python in T-SQL


Developing a data science solution usually includes intensive data exploration and data
visualization. Because visualization is such a powerful tool for understanding the
distribution of the data and outliers, Python provides many packages for visualizing
data. The matplotlib module is one of the more popular libraries for visualization, and
includes many functions for creating histograms, scatter plots, box plots, and other data
exploration graphs.
In this section, you learn how to work with plots using stored procedures. Rather than
open the image on the server, you store the Python object plot as varbinary data, and
then write that to a file that can be shared or viewed elsewhere.

Create a plot as varbinary data


The stored procedure returns a serialized Python figure object as a stream of varbinary
data. You cannot view the binary data directly, but you can use Python code on the
client to deserialize and view the figures, and then save the image file on a client
computer.

1. Create the stored procedure PyPlotMatplotlib.

In the following script:

The variable @query defines the query text SELECT tipped FROM
nyctaxi_sample , which is passed to the Python code block as the argument to
the script input variable, @input_data_1 .
The Python script is fairly simple: matplotlib figure objects are used to make
the histogram and scatter plot, and these objects are then serialized using the
pickle library.

The Python graphics object is serialized to a pandas DataFrame for output.

SQL

DROP PROCEDURE IF EXISTS PyPlotMatplotlib;

GO

CREATE PROCEDURE [dbo].[PyPlotMatplotlib]

AS

BEGIN

SET NOCOUNT ON;

DECLARE @query nvarchar(max) =

N'SELECT cast(tipped as int) as tipped, tip_amount, fare_amount


FROM [dbo].[nyctaxi_sample]'

EXECUTE sp_execute_external_script

@language = N'Python',

@script = N'

import matplotlib

matplotlib.use("Agg")

import matplotlib.pyplot as plt

import pandas as pd

import pickle

fig_handle = plt.figure()

plt.hist(InputDataSet.tipped)

plt.xlabel("Tipped")

plt.ylabel("Counts")

plt.title("Histogram, Tipped")

plot0 = pd.DataFrame(data =[pickle.dumps(fig_handle)], columns =


["plot"])

plt.clf()

plt.hist(InputDataSet.tip_amount)
plt.xlabel("Tip amount ($)")

plt.ylabel("Counts")

plt.title("Histogram, Tip amount")

plot1 = pd.DataFrame(data =[pickle.dumps(fig_handle)], columns =


["plot"])

plt.clf()

plt.hist(InputDataSet.fare_amount)

plt.xlabel("Fare amount ($)")

plt.ylabel("Counts")

plt.title("Histogram, Fare amount")

plot2 = pd.DataFrame(data =[pickle.dumps(fig_handle)], columns =


["plot"])

plt.clf()

plt.scatter( InputDataSet.fare_amount, InputDataSet.tip_amount)

plt.xlabel("Fare Amount ($)")

plt.ylabel("Tip Amount ($)")

plt.title("Tip amount by Fare amount")

plot3 = pd.DataFrame(data =[pickle.dumps(fig_handle)], columns =


["plot"])

plt.clf()

OutputDataSet = plot0.append(plot1, ignore_index=True).append(plot2,


ignore_index=True).append(plot3, ignore_index=True)

',

@input_data_1 = @query

WITH RESULT SETS ((plot varbinary(max)))

END

GO

2. Now run the stored procedure with no arguments to generate a plot from the data
hard-coded as the input query.

SQL

EXEC [dbo].[PyPlotMatplotlib]

3. The results should be something like this:

SQL

plot

0xFFD8FFE000104A4649...

0xFFD8FFE000104A4649...

0xFFD8FFE000104A4649...

0xFFD8FFE000104A4649...

4. From a Python client, you can now connect to the SQL Server instance that
generated the binary plot objects, and view the plots.

To do this, run the following Python code, replacing the server name, database
name, and credentials as appropriate (for Windows authentication, replace the UID
and PWD parameters with Trusted_Connection=True ). Make sure the Python version
is the same on the client and the server. Also make sure that the Python libraries
on your client (such as matplotlib) are the same or higher version relative to the
libraries installed on the server. To view a list of installed packages and their
versions, see Get Python package information.

Python

%matplotlib notebook

import pyodbc

import pickle

import os

cnxn = pyodbc.connect('DRIVER=SQL Server;SERVER={SERVER_NAME};DATABASE=


{DB_NAME};UID={USER_NAME};PWD={PASSWORD}')

cursor = cnxn.cursor()

cursor.execute("EXECUTE [dbo].[PyPlotMatplotlib]")

tables = cursor.fetchall()

for i in range(0, len(tables)):

fig = pickle.loads(tables[i][0])

fig.savefig(str(i)+'.png')

print("The plots are saved in directory: ",os.getcwd())

5. If the connection is successful, you should see a message like the following:

The plots are saved in directory: xxxx

6. The output file is created in the Python working directory. To view the plot, locate
the Python working directory, and open the file. The following image shows a plot
saved on the client computer.
Next steps
In this article, you:

" Reviewed the sample data


" Created plots using Python in T-SQL

Python tutorial: Create Data Features using T-SQL


Python tutorial: Create Data Features
using T-SQL
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part three of this five-part tutorial series, you'll learn how to create features from raw
data by using a Transact-SQL function. You'll then call that function from a SQL stored
procedure to create a table that contains the feature values.

The process of feature engineering, creating features from the raw data, can be a critical
step in advanced analytics modeling.

In this article, you'll:

" Modify a custom function to calculate trip distance


" Save the features using another custom function

In part one, you installed the prerequisites and restored the sample database.

In part two, you explored the sample data and generated some plots.

In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.

Define the Function


The distance values reported in the original data are based on the reported meter
distance, and don't necessarily represent geographical distance or distance traveled.
Therefore, you'll need to calculate the direct distance between the pick-up and drop-off
points, by using the coordinates available in the source NYC Taxi dataset. You can do this
by using the Haversine formula in a custom Transact-SQL function.

You'll use one custom T-SQL function, fnCalculateDistance, to compute the distance
using the Haversine formula, and use a second custom T-SQL function,
fnEngineerFeatures, to create a table containing all the features.

Calculate trip distance using fnCalculateDistance


The function fnCalculateDistance is included in the sample database. Take a minute to
review the code:

1. In Management Studio, expand Programmability, expand Functions and then


Scalar-valued functions.

2. Right-click fnCalculateDistance, and select Modify to open the Transact-SQL script


in a new query window.

It should look something like this:

SQL

CREATE FUNCTION [dbo].[fnCalculateDistance] (@Lat1 float, @Long1 float,


@Lat2 float, @Long2 float)

-- User-defined function that calculates the direct distance between


two geographical coordinates

RETURNS float

AS

BEGIN

DECLARE @distance decimal(28, 10)

-- Convert to radians

SET @Lat1 = @Lat1 / 57.2958

SET @Long1 = @Long1 / 57.2958

SET @Lat2 = @Lat2 / 57.2958

SET @Long2 = @Long2 / 57.2958

-- Calculate distance

SET @distance = (SIN(@Lat1) * SIN(@Lat2)) + (COS(@Lat1) * COS(@Lat2)


* COS(@Long2 - @Long1))

--Convert to miles

IF @distance <> 0

BEGIN

SET @distance = 3958.75 * ATAN(SQRT(1 - POWER(@distance, 2)) /


@distance);

END

RETURN @distance

END

GO

Notes:

The function is a scalar-valued function, returning a single data value of a


predefined type.
The function takes latitude and longitude values as inputs, obtained from trip pick-
up and drop-off locations. The Haversine formula converts locations to radians and
uses those values to compute the direct distance in miles between those two
locations.
Save the features using fnEngineerFeatures
To add the computed value to a table that can be used for training the model, you'll use
the custom T-SQL function, fnEngineerFeatures. This function is a table-valued function
that takes multiple columns as inputs, and outputs a table with multiple feature
columns. The purpose of this function is to create a feature set for use in building a
model. The function fnEngineerFeatures calls the previously created T-SQL function,
fnCalculateDistance, to get the direct distance between pickup and dropoff locations.

Take a minute to review the code:

SQL

CREATE FUNCTION [dbo].[fnEngineerFeatures] (

@passenger_count int = 0,

@trip_distance float = 0,

@trip_time_in_secs int = 0,

@pickup_latitude float = 0,

@pickup_longitude float = 0,

@dropoff_latitude float = 0,

@dropoff_longitude float = 0)

RETURNS TABLE

AS

RETURN

-- Add the SELECT statement with parameter references here

SELECT

@passenger_count AS passenger_count,

@trip_distance AS trip_distance,

@trip_time_in_secs AS trip_time_in_secs,

[dbo].[fnCalculateDistance](@pickup_latitude, @pickup_longitude,
@dropoff_latitude, @dropoff_longitude) AS direct_distance

GO

To verify that this function works, you can use it to calculate the geographical distance
for those trips where the metered distance was 0 but the pick-up and drop-off locations
were different.

SQL

SELECT tipped, fare_amount, passenger_count,(trip_time_in_secs/60) as


TripMinutes,

trip_distance, pickup_datetime, dropoff_datetime,

dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) AS direct_distance

FROM nyctaxi_sample

WHERE pickup_longitude != dropoff_longitude and pickup_latitude !=


dropoff_latitude and trip_distance = 0

ORDER BY trip_time_in_secs DESC

As you can see, the distance reported by the meter doesn't always correspond to
geographical distance. This is why feature engineering is important.

In the next part, you'll learn how to use these data features to create and train a
machine learning model using Python.

Next steps
In this article, you:

" Modified a custom function to calculate trip distance


" Saved the features using another custom function

Python tutorial: Train and save a Python model using T-SQL


Python tutorial: Train and save a Python
model using T-SQL
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part four of this five-part tutorial series, you'll learn how to train a machine learning
model using the Python packages scikit-learn and revoscalepy. These Python libraries
are already installed with SQL Server machine learning.

You'll load the modules and call the necessary functions to create and train the model
using a SQL Server stored procedure. The model requires the data features you
engineered in earlier parts of this tutorial series. Finally, you'll save the trained model to
a SQL Server table.

In this article, you'll:

" Create and train a model using a SQL stored procedure


" Save the trained model to a SQL table

In part one, you installed the prerequisites and restored the sample database.

In part two, you explored the sample data and generated some plots.

In part three, you learned how to create features from raw data by using a Transact-SQL
function. You then called that function from a stored procedure to create a table that
contains the feature values.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.

Split the sample data into training and testing


sets
1. Create a stored procedure called PyTrainTestSplit to divide the data in the
nyctaxi_sample table into two parts: nyctaxi_sample_training and
nyctaxi_sample_testing.

Run the following code to create it:

SQL
DROP PROCEDURE IF EXISTS PyTrainTestSplit;

GO

CREATE PROCEDURE [dbo].[PyTrainTestSplit] (@pct int)

AS

DROP TABLE IF EXISTS dbo.nyctaxi_sample_training

SELECT * into nyctaxi_sample_training FROM nyctaxi_sample WHERE


(ABS(CAST(BINARY_CHECKSUM(medallion,hack_license) as int)) % 100) <
@pct

DROP TABLE IF EXISTS dbo.nyctaxi_sample_testing

SELECT * into nyctaxi_sample_testing FROM nyctaxi_sample


WHERE (ABS(CAST(BINARY_CHECKSUM(medallion,hack_license) as int)) %
100) > @pct

GO

2. To divide your data using a custom split, run the stored procedure, and provide an
integer parameter that represents the percentage of data to allocate to the
training set. For example, the following statement would allocate 60% of data to
the training set.

SQL

EXEC PyTrainTestSplit 60

GO

Build a logistic regression model


After the data has been prepared, you can use it to train a model. You do this by calling
a stored procedure that runs some Python code, taking as input the training data table.
For this tutorial, you create two models, both binary classification models:

The stored procedure PyTrainScikit creates a tip prediction model using the scikit-
learn package.
The stored procedure TrainTipPredictionModelRxPy creates a tip prediction model
using the revoscalepy package.

Each stored procedure uses the input data you provide to create and train a logistic
regression model. All Python code is wrapped in the system stored procedure,
sp_execute_external_script.

To make it easier to retrain the model on new data, you wrap the call to
sp_execute_external_script in another stored procedure, and pass in the new training

data as a parameter. This section will walk you through that process.
PyTrainScikit
1. In Management Studio, open a new Query window and run the following
statement to create the stored procedure PyTrainScikit. The stored procedure
contains a definition of the input data, so you don't need to provide an input
query.

SQL

DROP PROCEDURE IF EXISTS PyTrainScikit;

GO

CREATE PROCEDURE [dbo].[PyTrainScikit] (@trained_model varbinary(max)


OUTPUT)

AS

BEGIN

EXEC sp_execute_external_script

@language = N'Python',

@script = N'

import numpy

import pickle

from sklearn.linear_model import LogisticRegression

##Create SciKit-Learn logistic regression model

X = InputDataSet[["passenger_count", "trip_distance",
"trip_time_in_secs", "direct_distance"]]

y = numpy.ravel(InputDataSet[["tipped"]])

SKLalgo = LogisticRegression()

logitObj = SKLalgo.fit(X, y)

##Serialize model

trained_model = pickle.dumps(logitObj)

',

@input_data_1 = N'

select tipped, fare_amount, passenger_count, trip_time_in_secs,


trip_distance,

dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance

from nyctaxi_sample_training

',

@input_data_1_name = N'InputDataSet',

@params = N'@trained_model varbinary(max) OUTPUT',

@trained_model = @trained_model OUTPUT;

END;

GO

2. Run the following SQL statements to insert the trained model into table
nyc_taxi_models.
SQL

DECLARE @model VARBINARY(MAX);

EXEC PyTrainScikit @model OUTPUT;


INSERT INTO nyc_taxi_models (name, model) VALUES('SciKit_model',
@model);

Processing of the data and fitting the model might take a couple of minutes.
Messages that would be piped to Python's stdout stream are displayed in the
Messages window of Management Studio. For example:

text

STDOUT message(s) from external script:

C:\Program Files\Microsoft SQL


Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\lib\site-
packages\revoscalepy

3. Open the table nyc_taxi_models. You can see that one new row has been added,
which contains the serialized model in the column model.

text

SciKit_model

0x800363736B6C6561726E2E6C696E6561....

TrainTipPredictionModelRxPy
This stored procedure uses the revoscalepy Python package. It contains objects,
transformation, and algorithms similar to those provided for the R language's
RevoScaleR package.

By using revoscalepy, you can create remote compute contexts, move data between
compute contexts, transform data, and train predictive models using popular algorithms
such as logistic and linear regression, decision trees, and more. For more information,
see revoscalepy module in SQL Server and revoscalepy function reference.

1. In Management Studio, open a new Query window and run the following
statement to create the stored procedure TrainTipPredictionModelRxPy. Because
the stored procedure already includes a definition of the input data, you don't
need to provide an input query.

SQL
DROP PROCEDURE IF EXISTS TrainTipPredictionModelRxPy;

GO

CREATE PROCEDURE [dbo].[TrainTipPredictionModelRxPy] (@trained_model


varbinary(max) OUTPUT)

AS

BEGIN

EXEC sp_execute_external_script

@language = N'Python',

@script = N'

import numpy

import pickle

from revoscalepy.functions.RxLogit import rx_logit

## Create a logistic regression model using rx_logit function from


revoscalepy package

logitObj = rx_logit("tipped ~ passenger_count + trip_distance +


trip_time_in_secs + direct_distance", data = InputDataSet);

## Serialize model

trained_model = pickle.dumps(logitObj)

',

@input_data_1 = N'

select tipped, fare_amount, passenger_count, trip_time_in_secs,


trip_distance,

dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance

from nyctaxi_sample_training

',

@input_data_1_name = N'InputDataSet',

@params = N'@trained_model varbinary(max) OUTPUT',

@trained_model = @trained_model OUTPUT;

END;

GO

This stored procedure performs the following steps as part of model training:

The SELECT query applies the custom scalar function fnCalculateDistance to


calculate the direct distance between the pick-up and drop-off locations. The
results of the query are stored in the default Python input variable,
InputDataset .
The binary variable tipped is used as the label or outcome column, and the
model is fit using these feature columns: passenger_count, trip_distance,
trip_time_in_secs, and direct_distance.
The trained model is serialized and stored in the Python variable logitObj . By
adding the T-SQL keyword OUTPUT, you can add the variable as an output of
the stored procedure. In the next step, that variable is used to insert the
binary code of the model into a database table nyc_taxi_models. This
mechanism makes it easy to store and re-use models.

2. Run the stored procedure as follows to insert the trained revoscalepy model into
the table nyc_taxi_models.

SQL

DECLARE @model VARBINARY(MAX);

EXEC TrainTipPredictionModelRxPy @model OUTPUT;

INSERT INTO nyc_taxi_models (name, model) VALUES('revoscalepy_model',


@model);

Processing of the data and fitting the model might take a while. Messages that
would be piped to Python's stdout stream are displayed in the Messages window
of Management Studio. For example:

text

STDOUT message(s) from external script:

C:\Program Files\Microsoft SQL


Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\lib\site-
packages\revoscalepy

3. Open the table nyc_taxi_models. You can see that one new row has been added,
which contains the serialized model in the column model.

text

revoscalepy_model

0x8003637265766F7363616c....

In the next part of this tutorial, you'll use the trained models to create predictions.

Next steps
In this article, you:

" Created and trained a model using a SQL stored procedure


" Saved the trained model to a SQL table

Python tutorial: Run predictions using Python embedded in a stored procedure


Python tutorial: Run predictions using
Python embedded in a stored procedure
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part five of this five-part tutorial series, you'll learn how to operationalize the models
that you trained and saved in the previous part.

In this scenario, operationalization means deploying the model to production for


scoring. The integration with SQL Server makes this fairly easy, because you can embed
Python code in a stored procedure. To get predictions from the model based on new
inputs, just call the stored procedure from an application and pass the new data.

This part of the tutorial demonstrates two methods for creating predictions based on a
Python model: batch scoring and scoring row by row.

Batch scoring: To provide multiple rows of input data, pass a SELECT query as an
argument to the stored procedure. The result is a table of observations
corresponding to the input cases.
Individual scoring: Pass a set of individual parameter values as input. The stored
procedure returns a single row or value.

All the Python code needed for scoring is provided as part of the stored procedures.

In this article, you'll:

" Create and use stored procedures for batch scoring


" Create and use stored procedures for scoring a single row

In part one, you installed the prerequisites and restored the sample database.

In part two, you explored the sample data and generated some plots.

In part three, you learned how to create features from raw data by using a Transact-SQL
function. You then called that function from a stored procedure to create a table that
contains the feature values.

In part four, you loaded the modules and called the necessary functions to create and
train the model using a SQL Server stored procedure.

Batch scoring
The first two stored procedures created using the following scripts illustrate the basic
syntax for wrapping a Python prediction call in a stored procedure. Both stored
procedures require a table of data as inputs.

The name of the model to use is provided as input parameter to the stored
procedure. The stored procedure loads the serialized model from the database
table nyc_taxi_models .table, using the SELECT statement in the stored procedure.

The serialized model is stored in the Python variable mod for further processing
using Python.

The new cases that need to be scored are obtained from the Transact-SQL query
specified in @input_data_1 . As the query data is read, the rows are saved in the
default data frame, InputDataSet .

Both stored procedure use functions from sklearn to calculate an accuracy metric,
AUC (area under curve). Accuracy metrics such as AUC can only be generated if
you also provide the target label (the tipped column). Predictions do not need the
target label (variable y ), but the accuracy metric calculation does.

Therefore, if you don't have target labels for the data to be scored, you can modify
the stored procedure to remove the AUC calculations, and return only the tip
probabilities from the features (variable X in the stored procedure).

PredictTipSciKitPy
Run the following T-SQL statements to create the stored procedure PredictTipSciKitPy .
This stored procedure requires a model based on the scikit-learn package, because it
uses functions specific to that package.

The data frame containing inputs is passed to the predict_proba function of the logistic
regression model, mod . The predict_proba function ( probArray = mod.predict_proba(X) )
returns a float that represents the probability that a tip (of any amount) will be given.

SQL

DROP PROCEDURE IF EXISTS PredictTipSciKitPy;

GO

CREATE PROCEDURE [dbo].[PredictTipSciKitPy] (@model varchar(50), @inquery


nvarchar(max))

AS

BEGIN

DECLARE @lmodel2 varbinary(max) = (select model from nyc_taxi_models where


name = @model);

EXEC sp_execute_external_script

@language = N'Python',

@script = N'

import pickle;

import numpy;

from sklearn import metrics

mod = pickle.loads(lmodel2)

X = InputDataSet[["passenger_count", "trip_distance", "trip_time_in_secs",


"direct_distance"]]

y = numpy.ravel(InputDataSet[["tipped"]])

probArray = mod.predict_proba(X)

probList = []

for i in range(len(probArray)):

probList.append((probArray[i])[1])

probArray = numpy.asarray(probList)

fpr, tpr, thresholds = metrics.roc_curve(y, probArray)

aucResult = metrics.auc(fpr, tpr)


print ("AUC on testing data is: " + str(aucResult))

OutputDataSet = pandas.DataFrame(data = probList, columns = ["predictions"])

',

@input_data_1 = @inquery,

@input_data_1_name = N'InputDataSet',

@params = N'@lmodel2 varbinary(max)',

@lmodel2 = @lmodel2

WITH RESULT SETS ((Score float));

END

GO

PredictTipRxPy
Run the following T-SQL statements to create the stored procedure PredictTipRxPy .
This stored procedure uses the same inputs and creates the same type of scores as the
previous stored procedure, but it uses functions from the revoscalepy package provided
with SQL Server machine learning.

SQL

DROP PROCEDURE IF EXISTS PredictTipRxPy;

GO

CREATE PROCEDURE [dbo].[PredictTipRxPy] (@model varchar(50), @inquery


nvarchar(max))

AS

BEGIN

DECLARE @lmodel2 varbinary(max) = (select model from nyc_taxi_models where


name = @model);

EXEC sp_execute_external_script

@language = N'Python',

@script = N'

import pickle;

import numpy;

from sklearn import metrics

from revoscalepy.functions.RxPredict import rx_predict;

mod = pickle.loads(lmodel2)

X = InputDataSet[["passenger_count", "trip_distance", "trip_time_in_secs",


"direct_distance"]]

y = numpy.ravel(InputDataSet[["tipped"]])

probArray = rx_predict(mod, X)

probList = probArray["tipped_Pred"].values

probArray = numpy.asarray(probList)

fpr, tpr, thresholds = metrics.roc_curve(y, probArray)

aucResult = metrics.auc(fpr, tpr)


print ("AUC on testing data is: " + str(aucResult))

OutputDataSet = pandas.DataFrame(data = probList, columns = ["predictions"])

',

@input_data_1 = @inquery,

@input_data_1_name = N'InputDataSet',

@params = N'@lmodel2 varbinary(max)',

@lmodel2 = @lmodel2

WITH RESULT SETS ((Score float));

END

GO

Run batch scoring using a SELECT query


The stored procedures PredictTipSciKitPy and PredictTipRxPy require two input
parameters:

The query that retrieves the data for scoring


The name of a trained model

By passing those arguments to the stored procedure, you can select a particular model
or change the data used for scoring.

1. To use the scikit-learn model for scoring, call the stored procedure
PredictTipSciKitPy, passing the model name and query string as inputs.

SQL

DECLARE @query_string nvarchar(max) -- Specify input query

SET @query_string='

select tipped, fare_amount, passenger_count, trip_time_in_secs,


trip_distance,

dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance

from nyctaxi_sample_testing'

EXEC [dbo].[PredictTipSciKitPy] 'SciKit_model', @query_string;

The stored procedure returns predicted probabilities for each trip that was passed
in as part of the input query.

If you're using SSMS (SQL Server Management Studio) for running queries, the
probabilities will appear as a table in the Results pane. The Messages pane outputs
the accuracy metric (AUC or area under curve) with a value of around 0.56.

2. To use the revoscalepy model for scoring, call the stored procedure
PredictTipRxPy, passing the model name and query string as inputs.

SQL

DECLARE @query_string nvarchar(max) -- Specify input query

SET @query_string='

select tipped, fare_amount, passenger_count, trip_time_in_secs,


trip_distance,

dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance

from nyctaxi_sample_testing'

EXEC [dbo].[PredictTipRxPy] 'revoscalepy_model', @query_string;

Single-row scoring
Sometimes, instead of batch scoring, you might want to pass in a single case, getting
values from an application, and returning a single result based on those values. For
example, you could set up an Excel worksheet, web application, or report to call the
stored procedure and pass to it inputs typed or selected by users.

In this section, you'll learn how to create single predictions by calling two stored
procedures:

PredictTipSingleModeSciKitPy is designed for single-row scoring using the scikit-


learn model.
PredictTipSingleModeRxPy is designed for single-row scoring using the
revoscalepy model.
If you haven't trained a model yet, return to part five!

Both models take as input a series of single values, such as passenger count, trip
distance, and so forth. A table-valued function, fnEngineerFeatures , is used to convert
latitude and longitude values from the inputs to a new feature, direct distance. Part four
contains a description of this table-valued function.

Both stored procedures create a score based on the Python model.

7 Note

It's important that you provide all the input features required by the Python model
when you call the stored procedure from an external application. To avoid errors,
you might need to cast or convert the input data to a Python data type, in addition
to validating data type and data length.

PredictTipSingleModeSciKitPy
The following stored procedure PredictTipSingleModeSciKitPy performs scoring using
the scikit-learn model.

SQL

DROP PROCEDURE IF EXISTS PredictTipSingleModeSciKitPy;

GO

CREATE PROCEDURE [dbo].[PredictTipSingleModeSciKitPy] (@model varchar(50),


@passenger_count int = 0,

@trip_distance float = 0,

@trip_time_in_secs int = 0,

@pickup_latitude float = 0,

@pickup_longitude float = 0,

@dropoff_latitude float = 0,

@dropoff_longitude float = 0)

AS

BEGIN

DECLARE @inquery nvarchar(max) = N'

SELECT * FROM [dbo].[fnEngineerFeatures](

@passenger_count,

@trip_distance,

@trip_time_in_secs,

@pickup_latitude,

@pickup_longitude,

@dropoff_latitude,

@dropoff_longitude)

'

DECLARE @lmodel2 varbinary(max) = (select model from nyc_taxi_models where


name = @model);

EXEC sp_execute_external_script

@language = N'Python',

@script = N'

import pickle;

import numpy;

# Load model and unserialize

mod = pickle.loads(model)

# Get features for scoring from input data

X = InputDataSet[["passenger_count", "trip_distance", "trip_time_in_secs",


"direct_distance"]]

# Score data to get tip prediction probability as a list (of float)

probList = []

probList.append((mod.predict_proba(X)[0])[1])

# Create output data frame

OutputDataSet = pandas.DataFrame(data = probList, columns = ["predictions"])

',

@input_data_1 = @inquery,

@params = N'@model varbinary(max),@passenger_count int,@trip_distance


float,

@trip_time_in_secs int ,

@pickup_latitude float ,

@pickup_longitude float ,

@dropoff_latitude float ,

@dropoff_longitude float',

@model = @lmodel2,

@passenger_count =@passenger_count ,

@trip_distance=@trip_distance,

@trip_time_in_secs=@trip_time_in_secs,

@pickup_latitude=@pickup_latitude,

@pickup_longitude=@pickup_longitude,

@dropoff_latitude=@dropoff_latitude,

@dropoff_longitude=@dropoff_longitude

WITH RESULT SETS ((Score float));


END

GO

PredictTipSingleModeRxPy
The following stored procedure PredictTipSingleModeRxPy performs scoring using the
revoscalepy model.

SQL

DROP PROCEDURE IF EXISTS PredictTipSingleModeRxPy;

GO

CREATE PROCEDURE [dbo].[PredictTipSingleModeRxPy] (@model varchar(50),


@passenger_count int = 0,

@trip_distance float = 0,

@trip_time_in_secs int = 0,

@pickup_latitude float = 0,

@pickup_longitude float = 0,

@dropoff_latitude float = 0,

@dropoff_longitude float = 0)

AS

BEGIN

DECLARE @inquery nvarchar(max) = N'

SELECT * FROM [dbo].[fnEngineerFeatures](

@passenger_count,

@trip_distance,

@trip_time_in_secs,

@pickup_latitude,

@pickup_longitude,

@dropoff_latitude,

@dropoff_longitude)

'

DECLARE @lmodel2 varbinary(max) = (select model from nyc_taxi_models where


name = @model);

EXEC sp_execute_external_script

@language = N'Python',

@script = N'

import pickle;

import numpy;

from revoscalepy.functions.RxPredict import rx_predict;

# Load model and unserialize

mod = pickle.loads(model)

# Get features for scoring from input data

X = InputDataSet[["passenger_count", "trip_distance", "trip_time_in_secs",


"direct_distance"]]

# Score data to get tip prediction probability as a list (of float)

probArray = rx_predict(mod, X)

probList = []

probList = probArray["tipped_Pred"].values

# Create output data frame

OutputDataSet = pandas.DataFrame(data = probList, columns = ["predictions"])

',

@input_data_1 = @inquery,

@params = N'@model varbinary(max),@passenger_count int,@trip_distance


float,

@trip_time_in_secs int ,

@pickup_latitude float ,

@pickup_longitude float ,

@dropoff_latitude float ,

@dropoff_longitude float',

@model = @lmodel2,

@passenger_count =@passenger_count ,

@trip_distance=@trip_distance,

@trip_time_in_secs=@trip_time_in_secs,

@pickup_latitude=@pickup_latitude,

@pickup_longitude=@pickup_longitude,

@dropoff_latitude=@dropoff_latitude,

@dropoff_longitude=@dropoff_longitude

WITH RESULT SETS ((Score float));


END

GO

Generate scores from models


After the stored procedures have been created, it's easy to generate a score based on
either model. Open a new Query window and provide parameters for each of the
feature columns.

The seven required values for these feature columns are, in order:

passenger_count
trip_distance
trip_time_in_secs
pickup_latitude
pickup_longitude
dropoff_latitude
dropoff_longitude

For example:

To generate a prediction by using the revoscalepy model, run this statement:

SQL

EXEC [dbo].[PredictTipSingleModeRxPy] 'revoscalepy_model', 1, 2.5, 631,


40.763958,-73.973373, 40.782139,-73.977303

To generate a score by using the scikit-learn model, run this statement:

SQL

EXEC [dbo].[PredictTipSingleModeSciKitPy] 'SciKit_model', 1, 2.5, 631,


40.763958,-73.973373, 40.782139,-73.977303

The output from both procedures is a probability of a tip being paid for the taxi trip with
the specified parameters or features.

Conclusion
In this tutorial series, you've learned how to work with Python code embedded in stored
procedures. The integration with Transact-SQL makes it much easier to deploy Python
models for prediction and to incorporate model retraining as part of an enterprise data
workflow.

Next steps
In this article, you:

" Created and used stored procedures for batch scoring


" Created and used stored procedures for scoring a single row

For more information about Python, see Python extension in SQL Server.
Tutorial: Develop a predictive model in
R with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In this four-part tutorial series, you will use R and a machine learning model in Azure
SQL Managed Instance Machine Learning Services to predict the number of ski rentals.

Imagine you own a ski rental business and you want to predict the number of rentals
that you'll have on a future date. This information will help you get your stock, staff, and
facilities ready.

In the first part of this series, you'll get set up with the prerequisites. In parts two and
three, you'll develop some R scripts in a notebook to prepare your data and train a
machine learning model. Then, in part three, you'll run those R scripts inside a database
using T-SQL stored procedures.

In this article, you'll learn how to:

" Restore a sample database

In part two, you'll learn how to load the data from a database into a Python data frame,
and prepare the data in R.

In part three, you'll learn how to train a machine learning model model in R.

In part four, you'll learn how to store the model in a database, and then create stored
procedures from the R scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.

Prerequisites
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.

SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.

R IDE - This tutorial uses RStudio Desktop .


RODBC - This driver is used in the R scripts you'll develop in this tutorial. If it's not
already installed, install it using the R command install.packages("RODBC") . For
more information on RODBC, see CRAN - Package RODBC .

SQL query tool - This tutorial assumes you're using Azure Data Studio. For more
information, see How to use notebooks in Azure Data Studio.

Restore the sample database


The sample database used in this tutorial has been saved to a .bak database backup file
for you to download and use.

1. Download the file TutorialDB.bak .

2. Follow the directions in Restore a database to a Managed Instance in SQL Server


Management Studio, using these details:

Import from the TutorialDB.bak file you downloaded


Name the target database "TutorialDB"

3. You can verify that the restored database exists by querying the dbo.rental_data
table:

SQL

USE TutorialDB;

SELECT * FROM [dbo].[rental_data];

Clean up resources
If you're not going to continue with this tutorial, delete the TutorialDB database.

Next steps
In part one of this tutorial series, you completed these steps:

Installed the prerequisites


Restored a sample database

To prepare the data for the machine learning model, follow part two of this tutorial
series:
Prepare data to train a predictive model in R
Tutorial: Prepare data to train a
predictive model in R with SQL machine
learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part two of this four-part tutorial series, you'll prepare data from a database using R.
Later in this series, you'll use this data to train and deploy a predictive model in R with
Azure SQL Managed Instance Machine Learning Services.

In this article, you'll learn how to:

" Restore a sample database into a database


" Load the data from the database into an R data frame
" Prepare the data in R by identifying some columns as categorical

In part one, you learned how to restore the sample database.

In part three, you'll learn how to train a machine learning model in R.

In part four, you'll learn how to store the model in a database, and then create stored
procedures from the R scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.

Prerequisites
Part two of this tutorial assumes you have completed part one and its prerequisites.

Load the data into a data frame


To use the data in R, you'll load the data from the database into a data frame
( rentaldata ).

Create a new RScript file in RStudio and run the following script. Replace ServerName
with your own connection information.

#Define the connection string to connect to the TutorialDB database

connStr <- "Driver=SQL


Server;Server=ServerName;Database=TutorialDB;uid=Username;pwd=Password"

#Get the data from the table

library(RODBC)

ch <- odbcDriverConnect(connStr)

#Import the data from the table

rentaldata <- sqlFetch(ch, "dbo.rental_data")

#Take a look at the structure of the data and the top rows

head(rentaldata)

str(rentaldata)

You should see results similar to the following.

results

Year Month Day RentalCount WeekDay Holiday Snow

1 2014 1 20 445 2 1 0

2 2014 2 13 40 5 0 0

3 2013 3 10 456 1 0 0

4 2014 3 31 38 2 0 0

5 2014 4 24 23 5 0 0

6 2015 2 11 42 4 0 0

'data.frame': 453 obs. of 7 variables:

$ Year : int 2014 2014 2013 2014 2014 2015 2013 2014 2013 2015 ...

$ Month : num 1 2 3 3 4 2 4 3 4 3 ...

$ Day : num 20 13 10 31 24 11 28 8 5 29 ...

$ RentalCount: num 445 40 456 38 23 42 310 240 22 360 ...

$ WeekDay : num 2 5 1 2 5 4 1 7 6 1 ...

$ Holiday : int 1 0 0 0 0 0 0 0 0 0 ...

$ Snow : num 0 0 0 0 0 0 0 0 0 0 ...

Prepare the data


In this sample database, most of the preparation has already been done, but you'll do
one more preparation here.
Use the following R script to identify three columns as
categories by changing the data types to factor.

#Changing the three factor columns to factor types

rentaldata$Holiday <- factor(rentaldata$Holiday);

rentaldata$Snow <- factor(rentaldata$Snow);

rentaldata$WeekDay <- factor(rentaldata$WeekDay);

#Visualize the dataset after the change

str(rentaldata);

You should see results similar to the following.

results

data.frame': 453 obs. of 7 variables:

$ Year : int 2014 2014 2013 2014 2014 2015 2013 2014 2013 2015 ...

$ Month : num 1 2 3 3 4 2 4 3 4 3 ...

$ Day : num 20 13 10 31 24 11 28 8 5 29 ...

$ RentalCount: num 445 40 456 38 23 42 310 240 22 360 ...

$ WeekDay : Factor w/ 7 levels "1","2","3","4",..: 2 5 1 2 5 4 1 7 6 1


...

$ Holiday : Factor w/ 2 levels "0","1": 2 1 1 1 1 1 1 1 1 1 ...

$ Snow : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...

The data is now prepared for training.

Clean up resources
If you're not going to continue with this tutorial, delete the TutorialDB database.

Next steps
In part two of this tutorial series, you learned how to:

Load the sample data into an R data frame


Prepare the data in R by identifying some columns as categorical

To create a machine learning model that uses data from the TutorialDB database, follow
part three of this tutorial series:

Create a predictive model in R with SQL machine learning


Tutorial: Create a predictive model in R
with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part three of this four-part tutorial series, you'll train a predictive model in R. In the
next part of this series, you'll deploy this model in an Azure SQL Managed Instance
database with Machine Learning Services.

In this article, you'll learn how to:

" Train two machine learning models


" Make predictions from both models
" Compare the results to choose the most accurate model

In part one, you learned how to restore the sample database.

In part two, you learned how to load the data from a database into a Python data frame
and prepare the data in R.

In part four, you'll learn how to store the model in a database, and then create stored
procedures from the Python scripts you developed in parts two and three. The stored
procedures will run in on the server to make predictions based on new data.

Prerequisites
Part three of this tutorial series assumes you have fulfilled the prerequisites of part one,
and completed the steps in part two.

Train two models


To find the best model for the ski rental data, create two different models (linear
regression and decision tree) and see which one is predicting more accurately. You'll use
the data frame rentaldata that you created in part one of this series.

#First, split the dataset into two different sets:

# one for training the model and the other for validating it

train_data = rentaldata[rentaldata$Year < 2015,];

test_data = rentaldata[rentaldata$Year == 2015,];

#Use the RentalCount column to check the quality of the prediction against
actual values

actual_counts <- test_data$RentalCount;

#Model 1: Use lm to create a linear regression model, trained with the


training data set

model_lm <- lm(RentalCount ~ Month + Day + WeekDay + Snow + Holiday, data =


train_data);

#Model 2: Use rpart to create a decision tree model, trained with the
training data set

library(rpart);

model_rpart <- rpart(RentalCount ~ Month + Day + WeekDay + Snow + Holiday,


data = train_data);

Make predictions from both models


Use a predict function to predict the rental counts using each trained model.

#Use both models to make predictions using the test data set.

predict_lm <- predict(model_lm, test_data)

predict_lm <- data.frame(RentalCount_Pred = predict_lm, RentalCount =


test_data$RentalCount,

Year = test_data$Year, Month = test_data$Month,

Day = test_data$Day, Weekday = test_data$WeekDay,

Snow = test_data$Snow, Holiday = test_data$Holiday)

predict_rpart <- predict(model_rpart, test_data)

predict_rpart <- data.frame(RentalCount_Pred = predict_rpart, RentalCount =


test_data$RentalCount,

Year = test_data$Year, Month = test_data$Month,

Day = test_data$Day, Weekday = test_data$WeekDay,

Snow = test_data$Snow, Holiday = test_data$Holiday)

#To verify it worked, look at the top rows of the two prediction data sets.

head(predict_lm);

head(predict_rpart);

results

RentalCount_Pred RentalCount Month Day WeekDay Snow Holiday

1 27.45858 42 2 11 4 0 0

2 387.29344 360 3 29 1 0 0

3 16.37349 20 4 22 4 0 0

4 31.07058 42 3 6 6 0 0

5 463.97263 405 2 28 7 1 0

6 102.21695 38 1 12 2 1 0

RentalCount_Pred RentalCount Month Day WeekDay Snow Holiday

1 40.0000 42 2 11 4 0 0

2 332.5714 360 3 29 1 0 0

3 27.7500 20 4 22 4 0 0

4 34.2500 42 3 6 6 0 0

5 645.7059 405 2 28 7 1 0

6 40.0000 38 1 12 2 1 0

Compare the results


Now you want to see which of the models gives the best predictions. A quick and easy
way to do this is to use a basic plotting function to view the difference between the
actual values in your training data and the predicted values.

#Use the plotting functionality in R to visualize the results from the


predictions

par(mfrow = c(1, 1));

plot(predict_lm$RentalCount_Pred - predict_lm$RentalCount, main =


"Difference between actual and predicted. lm")

plot(predict_rpart$RentalCount_Pred - predict_rpart$RentalCount, main =


"Difference between actual and predicted. rpart")

It looks like the decision tree model is the more accurate of the two models.

Clean up resources
If you're not going to continue with this tutorial, delete the TutorialDB database.

Next steps
In part three of this tutorial series, you learned how to:

Train two machine learning models


Make predictions from both models
Compare the results to choose the most accurate model

To deploy the machine learning model you've created, follow part four of this tutorial
series:
Deploy a predictive model in R with SQL machine learning
Tutorial: Deploy a predictive model in R
with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part four of this four-part tutorial series, you'll deploy a machine learning model
developed in R into Azure SQL Managed Instance using Machine Learning Services.

In this article, you'll learn how to:

" Create a stored procedure that generates the machine learning model


" Store the model in a database table
" Create a stored procedure that makes predictions using the model
" Execute the model with new data

In part one, you learned how to restore the sample database.

In part two, you learned how to import a sample database and then prepare the data to
be used for training a predictive model in R.

In part three, you learned how to create and train multiple machine learning models in
R, and then choose the most accurate one.

Prerequisites
Part four of this tutorial assumes you fulfilled the prerequisites of part one and
completed the steps in part two and part three.

Create a stored procedure that generates the


model
In part three of this tutorial series, you decided that a decision tree (dtree) model was
the most accurate. Now, using the R scripts you developed, create a stored procedure
( generate_rental_model ) that trains and generates the dtree model using rpart from the
R package.

Run the following commands in Azure Data Studio.

SQL
USE [TutorialDB]

DROP PROCEDURE IF EXISTS generate_rental_model;

GO

CREATE PROCEDURE generate_rental_model (@trained_model VARBINARY(max)


OUTPUT)

AS

BEGIN

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'

rental_train_data$Month <- factor(rental_train_data$Month);

rental_train_data$Day <- factor(rental_train_data$Day);

rental_train_data$Holiday <- factor(rental_train_data$Holiday);

rental_train_data$Snow <- factor(rental_train_data$Snow);

rental_train_data$WeekDay <- factor(rental_train_data$WeekDay);

#Create a dtree model and train it using the training data set

library(rpart);

model_dtree <- rpart(RentalCount ~ Month + Day + WeekDay + Snow + Holiday,


data = rental_train_data);

#Serialize the model before saving it to the database table

trained_model <- as.raw(serialize(model_dtree, connection=NULL));

'

, @input_data_1 = N'

SELECT RentalCount

, Year

, Month

, Day

, WeekDay

, Snow

, Holiday

FROM dbo.rental_data

WHERE Year < 2015

'

, @input_data_1_name = N'rental_train_data'

, @params = N'@trained_model varbinary(max) OUTPUT'

, @trained_model = @trained_model OUTPUT;

END;

GO

Store the model in a database table


Create a table in the TutorialDB database and then save the model to the table.

1. Create a table ( rental_models ) for storing the model.

SQL

USE TutorialDB;

DROP TABLE IF EXISTS rental_models;

GO

CREATE TABLE rental_models (

model_name VARCHAR(30) NOT NULL DEFAULT('default model') PRIMARY


KEY

, model VARBINARY(MAX) NOT NULL

);

GO

2. Save the model to the table as a binary object, with the model name "DTree".

SQL

-- Save model to table

TRUNCATE TABLE rental_models;

DECLARE @model VARBINARY(MAX);

EXECUTE generate_rental_model @model OUTPUT;

INSERT INTO rental_models (

model_name

, model

VALUES (

'DTree'

, @model

);

SELECT *

FROM rental_models;

Create a stored procedure that makes


predictions
Create a stored procedure ( predict_rentalcount_new ) that makes predictions using the
trained model and a set of new data.

SQL

-- Stored procedure that takes model name and new data as input parameters
and predicts the rental count for the new data

USE [TutorialDB]

DROP PROCEDURE IF EXISTS predict_rentalcount_new;

GO

CREATE PROCEDURE predict_rentalcount_new (

@model_name VARCHAR(100)

, @input_query NVARCHAR(MAX)

AS

BEGIN

DECLARE @model VARBINARY(MAX) = (

SELECT model

FROM rental_models

WHERE model_name = @model_name

);

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'

#Convert types to factors

rentals$Month <- factor(rentals$Month);

rentals$Day <- factor(rentals$Day);

rentals$Holiday <- factor(rentals$Holiday);

rentals$Snow <- factor(rentals$Snow);

rentals$WeekDay <- factor(rentals$WeekDay);

#Before using the model to predict, we need to unserialize it

rental_model <- unserialize(model);

#Call prediction function

rental_predictions <- predict(rental_model, rentals);

rental_predictions <- data.frame(rental_predictions);

'

, @input_data_1 = @input_query

, @input_data_1_name = N'rentals'

, @output_data_1_name = N'rental_predictions'

, @params = N'@model varbinary(max)'

, @model = @model

WITH RESULT SETS(("RentalCount_Predicted" FLOAT));

END;

GO

Execute the model with new data


Now you can use the stored procedure predict_rentalcount_new to predict the rental
count from new data.

SQL

-- Use the predict_rentalcount_new stored procedure with the model name and
a set of features to predict the rental count

EXECUTE dbo.predict_rentalcount_new @model_name = 'DTree'

, @input_query = '

SELECT CONVERT(INT, 3) AS Month

, CONVERT(INT, 24) AS Day

, CONVERT(INT, 4) AS WeekDay

, CONVERT(INT, 1) AS Snow

, CONVERT(INT, 1) AS Holiday

';

GO

You should see a result similar to the following.

results

RentalCount_Predicted

332.571428571429

You have successfully created, trained, and deployed a model in a database. You then
used that model in a stored procedure to predict values based on new data.

Clean up resources
When you've finished using the TutorialDB database, delete it from your server.

Next steps
In part four of this tutorial series, you learned how to:

Create a stored procedure that generates the machine learning model


Store the model in a database table
Create a stored procedure that makes predictions using the model
Execute the model with new data

To learn more about using R in Machine Learning Services, see:

Run simple R scripts


R data structures, types and objects
R functions
Tutorial: Develop a clustering model in R
with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In this four-part tutorial series, you'll use R to develop and deploy a K-Means clustering
model in Azure SQL Managed Instance Machine Learning Services to cluster customer
data.

In part one of this series, you'll set up the prerequisites for the tutorial and then restore
a sample dataset to a database.
In parts two and three, you'll develop some R scripts in
an Azure Data Studio notebook to analyze and prepare this sample data and train a
machine learning model. Then, in part four, you'll run those R scripts inside a database
using stored procedures.

Clustering can be explained as organizing data into groups where members of a group
are similar in some way. For this tutorial series, imagine you own a retail business. You'll
use the K-Means algorithm to perform the clustering of customers in a dataset of
product purchases and returns. By clustering customers, you can focus your marketing
efforts more effectively by targeting specific groups. K-Means clustering is an
unsupervised learning algorithm that looks for patterns in data based on similarities.

In this article, you'll learn how to:

" Restore a sample database

In part two, you'll learn how to prepare the data from a database to perform clustering.

In part three, you'll learn how to create and train a K-Means clustering model in R.

In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in R based on new data.

Prerequisites
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.

SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
Azure Data Studio. You'll use a notebook in Azure Data Studio for SQL. For more
information about notebooks, see How to use notebooks in Azure Data Studio.

R IDE - This tutorial uses RStudio Desktop .

RODBC - This driver is used in the R scripts you'll develop in this tutorial. If it's not
already installed, install it using the R command install.packages("RODBC") . For
more information on RODBC, see CRAN - Package RODBC .

Restore the sample database


The sample dataset used in this tutorial has been saved to a .bak database backup file
for you to download and use. This dataset is derived from the tpcx-bb dataset
provided by the Transaction Processing Performance Council (TPC) .

1. Download the file tpcxbb_1gb.bak .

2. Follow the directions in Restore a database to a Managed Instance in SQL Server


Management Studio, using these details:

Import from the tpcxbb_1gb.bak file you downloaded


Name the target database "tpcxbb_1gb"

3. You can verify that the dataset exists after you have restored the database by
querying the dbo.customer table:

SQL

USE tpcxbb_1gb;

SELECT * FROM [dbo].[customer];

Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.

Next steps
In part one of this tutorial series, you completed these steps:

Installed the prerequisites


Restored a sample database
To prepare the data for the machine learning model, follow part two of this tutorial
series:

Prepare data to perform clustering


Tutorial: Prepare data to perform
clustering in R with SQL machine
learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part two of this four-part tutorial series, you'll prepare the data from a database to
perform clustering in R with Azure SQL Managed Instance Machine Learning Services.

In this article, you'll learn how to:

" Separate customers along different dimensions using R


" Load the data from the database into an R data frame

In part one, you installed the prerequisites and restored the sample database.

In part three, you'll learn how to create and train a K-Means clustering model in R.

In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in R based on new data.

Prerequisites
Part two of this tutorial assumes you have completed part one.

Separate customers
Create a new RScript file in RStudio and run the following script.
In the SQL query, you're
separating customers along the following dimensions:

orderRatio = return order ratio (total number of orders partially or fully returned
versus the total number of orders)
itemsRatio = return item ratio (total number of items returned versus the number
of items purchased)
monetaryRatio = return amount ratio (total monetary amount of items returned
versus the amount purchased)
frequency = return frequency

In the connStr function, replace ServerName with your own connection information.
R

# Define the connection string to connect to the tpcxbb_1gb database

connStr <- "Driver=SQL


Server;Server=ServerName;Database=tpcxbb_1gb;uid=Username;pwd=Password"

#Define the query to select data

input_query <- "

SELECT ss_customer_sk AS customer


,round(CASE

WHEN (

(orders_count = 0)

OR (returns_count IS NULL)

OR (orders_count IS NULL)

OR ((returns_count / orders_count) IS NULL)

THEN 0.0

ELSE (cast(returns_count AS NCHAR(10)) / orders_count)

END, 7) AS orderRatio
,round(CASE

WHEN (

(orders_items = 0)

OR (returns_items IS NULL)

OR (orders_items IS NULL)

OR ((returns_items / orders_items) IS NULL)

THEN 0.0

ELSE (cast(returns_items AS NCHAR(10)) / orders_items)

END, 7) AS itemsRatio
,round(CASE

WHEN (

(orders_money = 0)

OR (returns_money IS NULL)

OR (orders_money IS NULL)

OR ((returns_money / orders_money) IS NULL)

THEN 0.0

ELSE (cast(returns_money AS NCHAR(10)) / orders_money)

END, 7) AS monetaryRatio

,round(CASE

WHEN (returns_count IS NULL)

THEN 0.0

ELSE returns_count

END, 0) AS frequency

FROM (

SELECT ss_customer_sk,

-- return order ratio

COUNT(DISTINCT (ss_ticket_number)) AS orders_count,

-- return ss_item_sk ratio

COUNT(ss_item_sk) AS orders_items,

-- return monetary amount ratio

SUM(ss_net_paid) AS orders_money

FROM store_sales s

GROUP BY ss_customer_sk

) orders

LEFT OUTER JOIN (

SELECT sr_customer_sk,

-- return order ratio

count(DISTINCT (sr_ticket_number)) AS returns_count,

-- return ss_item_sk ratio

COUNT(sr_item_sk) AS returns_items,

-- return monetary amount ratio

SUM(sr_return_amt) AS returns_money

FROM store_returns

GROUP BY sr_customer_sk

) returned ON ss_customer_sk = sr_customer_sk";

Load the data into a data frame


Now use the following script to return the results from the query to an R data frame.

# Query using input_query and get the results back

# to data frame customer_data

library(RODBC)

ch <- odbcDriverConnect(connStr)

customer_data <- sqlQuery(ch, input_query)

# Take a look at the data just loaded

head(customer_data, n = 5);

You should see results similar to the following.

results

customer orderRatio itemsRatio monetaryRatio frequency

1 29727 0 0 0.000000 0

2 26429 0 0 0.041979 1

3 60053 0 0 0.065762 3

4 97643 0 0 0.037034 3

5 32549 0 0 0.031281 4

Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.
Next steps
In part two of this tutorial series, you learned how to:

Separate customers along different dimensions using R


Load the data from the database into an R data frame

To create a machine learning model that uses this customer data, follow part three of
this tutorial series:

Create a predictive model in R with SQL machine learning


Tutorial: Build a clustering model in R
with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part three of this four-part tutorial series, you'll build a K-Means model in R to
perform clustering. In the next part of this series, you'll deploy this model in a database
with Azure SQL Managed Instance Machine Learning Services.

In this article, you'll learn how to:

" Define the number of clusters for a K-Means algorithm


" Perform clustering
" Analyze the results

In part one, you installed the prerequisites and restored the sample database.

In part two, you learned how to prepare the data from a database to perform clustering.

In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in R based on new data.

Prerequisites
Part three of this tutorial series assumes you have fulfilled the prerequisites of part
one and completed the steps in part two.

Define the number of clusters


To cluster your customer data, you'll use the K-Means clustering algorithm, one of the
simplest and most well-known ways of grouping data.
You can read more about K-
Means in A complete guide to K-means clustering algorithm .

The algorithm accepts two inputs: The data itself, and a predefined number "k"
representing the number of clusters to generate.
The output is k clusters with the input
data partitioned among the clusters.

To determine the number of clusters for the algorithm to use, use a plot of the within
groups sum of squares, by number of clusters extracted. The appropriate number of
clusters to use is at the bend or "elbow" of the plot.
R

# Determine number of clusters by using a plot of the within groups sum of


squares,

# by number of clusters extracted.

wss <- (nrow(customer_data) - 1) * sum(apply(customer_data, 2, var))

for (i in 2:20)

wss[i] <- sum(kmeans(customer_data, centers = i)$withinss)

plot(1:20, wss, type = "b", xlab = "Number of Clusters", ylab = "Within


groups sum of squares")

Based on the graph, it looks like k = 4 would be a good value to try. That k value will
group the customers into four clusters.

Perform clustering
In the following R script, you'll use the function kmeans to perform clustering.

# Output table to hold the customer group mappings.

# Generate clusters using Kmeans and output key / cluster to a table

# called return_cluster

## create clustering model

clust <- kmeans(customer_data[,2:5],4)

## create clustering ouput for table

customer_cluster <-
data.frame(cluster=clust$cluster,customer=customer_data$customer,orderRatio=
customer_data$orderRatio,

itemsRatio=customer_data$itemsRatio,monetaryRatio=customer_data$monetaryRati
o,frequency=customer_data$frequency)

## write cluster output to DB table

sqlSave(ch, customer_cluster, tablename = "return_cluster")

# Read the customer returns cluster table from the database

customer_cluster_check <- sqlFetch(ch, "return_cluster")

head(customer_cluster_check)

Analyze the results


Now that you've done the clustering using K-Means, the next step is to analyze the
result and see if you can find any actionable information.

#Look at the clustering details to analyze results

clust[-1]

results

$centers

orderRatio itemsRatio monetaryRatio frequency

1 0.621835791 0.1701519 0.35510836 1.009025

2 0.074074074 0.0000000 0.05886575 2.363248

3 0.004807692 0.0000000 0.04618708 5.050481

4 0.000000000 0.0000000 0.00000000 0.000000

$totss

[1] 40191.83

$withinss

[1] 19867.791 215.714 660.784 0.000

$tot.withinss

[1] 20744.29

$betweenss

[1] 19447.54

$size

[1] 4543 702 416 31675

$iter

[1] 3

$ifault

[1] 0

The four cluster means are given using the variables defined in part two:

orderRatio = return order ratio (total number of orders partially or fully returned
versus the total number of orders)
itemsRatio = return item ratio (total number of items returned versus the number
of items purchased)
monetaryRatio = return amount ratio (total monetary amount of items returned
versus the amount purchased)
frequency = return frequency

Data mining using K-Means often requires further analysis of the results, and further
steps to better understand each cluster, but it can provide some good leads.
Here are a
couple ways you could interpret these results:

Cluster 1 (the largest cluster) seems to be a group of customers that are not active
(all values are zero).
Cluster 3 seems to be a group that stands out in terms of return behavior.

Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.

Next steps
In part three of this tutorial series, you learned how to:

Define the number of clusters for a K-Means algorithm


Perform clustering
Analyze the results

To deploy the machine learning model you've created, follow part four of this tutorial
series:

Deploy a clustering model in R with SQL machine learning


Tutorial: Deploy a clustering model in R
with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part four of this four-part tutorial series, you'll deploy a clustering model, developed
in R, into a database using Azure SQL Managed Instance Machine Learning Services.

In order to perform clustering on a regular basis, as new customers are registering, you
need to be able call the R script from any app. To do that, you can deploy the R script in
a database by putting the R script inside a SQL stored procedure. Because your model
executes in the database, it can easily be trained against data stored in the database.

In this article, you'll learn how to:

" Create a stored procedure that generates the model


" Perform clustering
" Use the clustering information

In part one, you installed the prerequisites and restored the sample database.

In part two, you learned how to prepare the data from a database to perform clustering.

In part three, you learned how to create and train a K-Means clustering model in R.

Prerequisites
Part four of this tutorial series assumes you have fulfilled the prerequisites of part
one and completed the steps in part two and part three.

Create a stored procedure that generates the


model
Run the following T-SQL script to create the stored procedure. The procedure recreates
the steps you developed in parts two and three of this tutorial series:

classify customers based on their purchase and return history


generate four clusters of customers using a K-Means algorithm
The procedure stores the resulting customer cluster mappings in the database table
customer_return_clusters.

SQL

USE [tpcxbb_1gb]

DROP PROC IF EXISTS generate_customer_return_clusters;

GO

CREATE procedure [dbo].[generate_customer_return_clusters]

AS

/*

This procedure uses R to classify customers into different groups

based on their purchase & return history.

*/

BEGIN

DECLARE @duration FLOAT

, @instance_name NVARCHAR(100) = @@SERVERNAME

, @database_name NVARCHAR(128) = db_name()

-- Input query to generate the purchase history & return metrics

, @input_query NVARCHAR(MAX) = N'

SELECT ss_customer_sk AS customer,

round(CASE

WHEN (

(orders_count = 0)

OR (returns_count IS NULL)

OR (orders_count IS NULL)

OR ((returns_count / orders_count) IS NULL)

THEN 0.0

ELSE (cast(returns_count AS NCHAR(10)) / orders_count)

END, 7) AS orderRatio,

round(CASE

WHEN (

(orders_items = 0)

OR (returns_items IS NULL)

OR (orders_items IS NULL)

OR ((returns_items / orders_items) IS NULL)

THEN 0.0

ELSE (cast(returns_items AS NCHAR(10)) / orders_items)

END, 7) AS itemsRatio,

round(CASE

WHEN (

(orders_money = 0)

OR (returns_money IS NULL)

OR (orders_money IS NULL)

OR ((returns_money / orders_money) IS NULL)

THEN 0.0

ELSE (cast(returns_money AS NCHAR(10)) / orders_money)

END, 7) AS monetaryRatio,

round(CASE

WHEN (returns_count IS NULL)

THEN 0.0

ELSE returns_count

END, 0) AS frequency

FROM (

SELECT ss_customer_sk,

-- return order ratio

COUNT(DISTINCT (ss_ticket_number)) AS orders_count,

-- return ss_item_sk ratio

COUNT(ss_item_sk) AS orders_items,

-- return monetary amount ratio

SUM(ss_net_paid) AS orders_money

FROM store_sales s

GROUP BY ss_customer_sk

) orders

LEFT OUTER JOIN (

SELECT sr_customer_sk,

-- return order ratio

count(DISTINCT (sr_ticket_number)) AS returns_count,

-- return ss_item_sk ratio

COUNT(sr_item_sk) AS returns_items,

-- return monetary amount ratio

SUM(sr_return_amt) AS returns_money

FROM store_returns

GROUP BY sr_customer_sk

) returned ON ss_customer_sk = sr_customer_sk

'

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N'

# Define the connection string

connStr <- paste("Driver=SQL Server; Server=", instance_name,

"; Database=", database_name,

"; uid=Username;pwd=Password; ",

sep="" )

# Input customer data that needs to be classified.

# This is the result we get from the query.

library(RODBC)

ch <- odbcDriverConnect(connStr);

customer_data <- sqlQuery(ch, input_query)

sqlDrop(ch, "customer_return_clusters")

## create clustering model

clust <- kmeans(customer_data[,2:5],4)

## create clustering output for table

customer_cluster <-
data.frame(cluster=clust$cluster,customer=customer_data$customer,orderRatio=
customer_data$orderRatio,


itemsRatio=customer_data$itemsRatio,monetaryRatio=customer_data$monetaryRati
o,frequency=customer_data$frequency)

## write cluster output to DB table

sqlSave(ch, customer_cluster, tablename = "customer_return_clusters")

## clean up

odbcClose(ch)

'

, @input_data_1 = N''

, @params = N'@instance_name nvarchar(100), @database_name


nvarchar(128), @input_query nvarchar(max), @duration float OUTPUT'

, @instance_name = @instance_name

, @database_name = @database_name

, @input_query = @input_query
, @duration = @duration OUTPUT;

END;

GO

Perform clustering
Now that you've created the stored procedure, execute the following script to perform
clustering.

SQL

--Empty table of the results before running the stored procedure

TRUNCATE TABLE customer_return_clusters;

--Execute the clustering

--This will load the table customer_return_clusters with cluster mappings

EXECUTE [dbo].[generate_customer_return_clusters];

Verify that it works and that we actually have the list of customers and their cluster
mappings.

SQL

--Select data from table customer_return_clusters

--to verify that the clustering data was loaded

SELECT TOP (5) *

FROM customer_return_clusters;

result

cluster customer orderRatio itemsRatio monetaryRatio frequency

1 29727 0 0 0 0

4 26429 0 0 0.041979 1

2 60053 0 0 0.065762 3

2 97643 0 0 0.037034 3

2 32549 0 0 0.031281 4

Use the clustering information


Because you stored the clustering procedure in the database, it can perform clustering
efficiently against customer data stored in the same database. You can execute the
procedure whenever your customer data is updated and use the updated clustering
information.

Suppose you want to send a promotional email to customers in cluster 0, the group that
was inactive (you can see how the four clusters were described in part three of this
tutorial). The following code selects the email addresses of customers in cluster 0.

SQL

USE [tpcxbb_1gb]

--Get email addresses of customers in cluster 0 for a promotion campaign

SELECT customer.[c_email_address], customer.c_customer_sk

FROM dbo.customer

JOIN

[dbo].[customer_clusters] as c

ON c.Customer = customer.c_customer_sk

WHERE c.cluster = 0

You can change the c.cluster value to return email addresses for customers in other
clusters.

Clean up resources
When you're finished with this tutorial, you can delete the tpcxbb_1gb database.

Next steps
In part four of this tutorial series, you learned how to:

Create a stored procedure that generates the model


Perform clustering with SQL machine learning
Use the clustering information

To learn more about using R in Machine Learning Services, see:

Run simple R scripts


R data structures, types and objects
R functions
R tutorial: Predict NYC taxi fares with
binary classification
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In this five-part tutorial series for SQL programmers, you'll learn about R integration in
Machine Learning Services in Azure SQL Managed Instance.

You'll build and deploy an R-based machine learning solution using a sample database
on SQL Server. You'll use T-SQL, Azure Data Studio or SQL Server Management Studio,
and a database engine instance with SQL machine learning and R language support

This tutorial series introduces you to R functions used in a data modeling workflow.
Parts include data exploration, building and training a binary classification model, and
model deployment. You'll use sample data from the New York City Taxi and Limousine
Commission. The model you'll build predicts whether a trip is likely to result in a tip
based on the time of day, distance traveled, and pick-up location.

In the first part of this series, you'll install the prerequisites and restore the sample
database. In parts two and three, you'll develop some R scripts to prepare your data and
train a machine learning model. Then, in parts four and five, you'll run those R scripts
inside the database using T-SQL stored procedures.

In this article, you'll:

" Install prerequisites
" Restore the sample database

In part two, you'll explore the sample data and generate some plots.

In part three, you'll learn how to create features from raw data by using a Transact-SQL
function. You'll then call that function from a stored procedure to create a table that
contains the feature values.

In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.

7 Note
This tutorial is available in both R and Python. For the Python version, see Python
tutorial: Predict NYC taxi fares with binary classification.

Prerequisites
Install R libraries

Grant permissions to execute Python scripts

Restore the NYC Taxi demo database

All tasks can be done using Transact-SQL stored procedures in Azure Data Studio or
Management Studio.

This tutorial assumes familiarity with basic database operations such as creating
databases and tables, importing data, and writing SQL queries. It does not assume you
know R and all R code is provided.

Background for SQL developers


The process of building a machine learning solution is a complex one that can involve
multiple tools, and the coordination of subject matter experts across several phases:

obtaining and cleaning data


exploring the data and building features useful for modeling
training and tuning the model
deployment to production

Development and testing of the actual code is best performed using a dedicated R
development environment. However, after the script is fully tested, you can easily deploy
it to SQL Server using Transact-SQL stored procedures in the familiar environment of
Azure Data Studio or Management Studio. Wrapping external code in stored procedures
is the primary mechanism for operationalizing code in SQL Server.

After the model has been saved to the database, you can call the model for predictions
from Transact-SQL by using stored procedures.

Whether you're a SQL programmer new to R, or an R developer new to SQL, this five-
part tutorial series introduces a typical workflow for conducting in-database analytics
with R and SQL Server.
Next steps
In this article, you:

" Installed prerequisites
" Restored the sample database

R tutorial: Explore and visualize data


R tutorial: Explore and visualize data
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part two of this five-part tutorial series, you'll explore the sample data and generate
some plots. Later, you'll learn how to serialize graphics objects in Python, and then
deserialize those objects and make plots.

In part two of this five-part tutorial series, you'll review the sample data and then
generate some plots using the generic barplot and hist functions in base R.

A key objective of this article is showing how to call R functions from Transact-SQL in
stored procedures and save the results in application file formats:

Create a stored procedure using barplot to generate an R plot as varbinary data.


Use bcp to export the binary stream to an image file.
Create a stored procedure using hist to generate a plot, saving results as JPG and
PDF output.

7 Note

Because visualization is such a powerful tool for understanding data shape and
distribution, R provides a range of functions and packages for generating
histograms, scatter plots, box plots, and other data exploration graphs. R typically
creates images using an R device for graphical output, which you can capture and
store as a varbinary data type for rendering in application. You can also save the
images to any of the support file formats (.JPG, .PDF, etc.).

In this article, you'll:

" Review the sample data


" Create plots using R in T-SQL
" Output plots in multiple file formats

In part one, you installed the prerequisites and restored the sample database.

In part three, you'll learn how to create features from raw data by using a Transact-SQL
function. You'll then call that function from a stored procedure to create a table that
contains the feature values.
In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.

Review the data


Developing a data science solution usually includes intensive data exploration and data
visualization. So first take a minute to review the sample data, if you haven't already.

In the original public dataset, the taxi identifiers and trip records were provided in
separate files. However, to make the sample data easier to use, the two original datasets
have been joined on the columns medallion, hack_license, and pickup_datetime. The
records were also sampled to get just 1% of the original number of records. The
resulting down-sampled dataset has 1,703,957 rows and 23 columns.

Taxi identifiers

The medallion column represents the taxi's unique ID number.

The hack_license column contains the taxi driver's license number (anonymized).

Trip and fare records

Each trip record includes the pickup and drop-off location and time, and the trip
distance.

Each fare record includes payment information such as the payment type, total
amount of payment, and the tip amount.

The last three columns can be used for various machine learning tasks. The
tip_amount column contains continuous numeric values and can be used as the
label column for regression analysis. The tipped column has only yes/no values and
is used for binary classification. The tip_class column has multiple class labels and
therefore can be used as the label for multi-class classification tasks.

This walkthrough demonstrates only the binary classification task; you are welcome
to try building models for the other two machine learning tasks, regression and
multiclass classification.

The values used for the label columns are all based on the tip_amount column,
using these business rules:
Derived column name Rule

tipped If tip_amount > 0, tipped = 1, otherwise tipped = 0

tip_class Class 0: tip_amount = $0

Class 1: tip_amount > $0 and tip_amount <= $5

Class 2: tip_amount > $5 and tip_amount <= $10

Class 3: tip_amount > $10 and tip_amount <= $20

Class 4: tip_amount > $20

Create plots using R in T-SQL


To create the plot, use the R function barplot . This step plots a histogram based on
data from a Transact-SQL query. You can wrap this function in a stored procedure,
RPlotHistogram.

1. In SQL Server Management Studio, in Object Explorer, right-click the


NYCTaxi_Sample database and select New Query. Or, in Azure Data Studio, select
New Notebook from the File menu and connect to the database.

2. Paste in the following script to create a stored procedure that plots the histogram.
This example is named RPlotHistogram.

SQL

CREATE PROCEDURE [dbo].[RPlotHistogram]

AS

BEGIN

SET NOCOUNT ON;

DECLARE @query nvarchar(max) =

N'SELECT tipped FROM [dbo].[nyctaxi_sample]'

EXECUTE sp_execute_external_script @language = N'R',


@script = N'

image_file = tempfile();

jpeg(filename = image_file);
#Plot histogram

barplot(table(InputDataSet$tipped), main = "Tip Histogram",


col="lightgreen", xlab="Tipped or not", ylab = "Counts", space=0)

dev.off();

OutputDataSet <- data.frame(data=readBin(file(image_file, "rb"),


what=raw(), n=1e6));

',

@input_data_1 = @query

WITH RESULT SETS ((plot varbinary(max)));

END

GO

Key points to understand in this script include the following:

The variable @query defines the query text ( 'SELECT tipped FROM nyctaxi_sample' ),
which is passed to the R script as the argument to the script input variable,
@input_data_1 . For R scripts that run as external processes, you should have a one-
to-one mapping between inputs to your script, and inputs to the
sp_execute_external_script system stored procedure that starts the R session on
SQL Server.

Within the R script, a variable ( image_file ) is defined to store the image.

The barplot function is called to generate the plot.

The R device is set to off because you are running this command as an external
script in SQL Server. Typically in R, when you issue a high-level plotting command,
R opens a graphics window, called a device. You can turn the device off if you are
writing to a file or handling the output some other way.

The R graphics object is serialized to an R data.frame for output.

Execute the stored procedure and use bcp to export


binary data to an image file
The stored procedure returns the image as a stream of varbinary data, which obviously
you cannot view directly. However, you can use the bcp utility to get the varbinary data
and save it as an image file on a client computer.

1. In Management Studio, run the following statement:

SQL

EXEC [dbo].[RPlotHistogram]

Results

plot
0xFFD8FFE000104A4649...

2. Open a PowerShell command prompt and run the following command, providing
the appropriate instance name, database name, username, and credentials as
arguments. For those using Windows identities, you can replace -U and -P with -T.
PowerShell

bcp "exec RPlotHistogram" queryout "plot.jpg" -S <SQL Server instance


name> -d NYCTaxi_Sample -U <user name> -P <password> -T

7 Note

Command switches for bcp are case-sensitive.

3. If the connection is successful, you will be prompted to enter more information


about the graphic file format.

Press ENTER at each prompt to accept the defaults, except for these changes:

For prefix-length of field plot, type 0.

Type Y if you want to save the output parameters for later reuse.

text

Enter the file storage type of field plot [varbinary(max)]:

Enter prefix-length of field plot [8]: 0

Enter length of field plot [0]:

Enter field terminator [none]:

Do you want to save this format information in a file? [Y/n]

Host filename [bcp.fmt]:

Results

text

Starting copy...

1 rows copied.

Network packet size (bytes): 4096


Clock Time (ms.) Total : 3922 Average : (0.25 rows per sec.)

 Tip

If you save the format information to file (bcp.fmt), the bcp utility generates a
format definition that you can apply to similar commands in future without
being prompted for graphic file format options. To use the format file, add -f
bcp.fmt to the end of any command line, after the password argument.
4. The output file will be created in the same directory where you ran the PowerShell
command. To view the plot, just open the file plot.jpg.

Create a stored procedure using hist


Typically, data scientists generate multiple data visualizations to get insights into the
data from different perspectives. In this example, you will create a stored procedure
called RPlotHist to write histograms, scatterplots, and other R graphics to .JPG and .PDF
format.

This stored procedure uses the hist function to create the histogram, exporting the
binary data to popular formats such as .JPG, .PDF, and .PNG.

1. In SQL Server Management Studio, in Object Explorer, right-click the


NYCTaxi_Sample database and select New Query.

2. Paste in the following script to create a stored procedure that plots the histogram.
This example is named RPlotHist .

SQL

CREATE PROCEDURE [dbo].[RPlotHist]

AS

BEGIN

SET NOCOUNT ON;

DECLARE @query nvarchar(max) =

N'SELECT cast(tipped as int) as tipped, tip_amount, fare_amount FROM


[dbo].[nyctaxi_sample]'

EXECUTE sp_execute_external_script @language = N'R',


@script = N'

# Set output directory for files and check for existing files with
same names

mainDir <- ''C:\\temp\\plots''

dir.create(mainDir, recursive = TRUE, showWarnings = FALSE)

setwd(mainDir);

print("Creating output plot files:", quote=FALSE)

# Open a jpeg file and output histogram of tipped variable in that


file.

dest_filename = tempfile(pattern = ''rHistogram_Tipped_'', tmpdir =


mainDir)

dest_filename = paste(dest_filename, ''.jpg'',sep="")

print(dest_filename, quote=FALSE);

jpeg(filename=dest_filename);

hist(InputDataSet$tipped, col = ''lightgreen'', xlab=''Tipped'',

ylab = ''Counts'', main = ''Histogram, Tipped'');

dev.off();

# Open a pdf file and output histograms of tip amount and fare
amount.

# Outputs two plots in one row

dest_filename = tempfile(pattern =
''rHistograms_Tip_and_Fare_Amount_'', tmpdir = mainDir)

dest_filename = paste(dest_filename, ''.pdf'',sep="")

print(dest_filename, quote=FALSE);

pdf(file=dest_filename, height=4, width=7);

par(mfrow=c(1,2));

hist(InputDataSet$tip_amount, col = ''lightgreen'',

xlab=''Tip amount ($)'',

ylab = ''Counts'',

main = ''Histogram, Tip amount'', xlim = c(0,40), 100);

hist(InputDataSet$fare_amount, col = ''lightgreen'',

xlab=''Fare amount ($)'',

ylab = ''Counts'',

main = ''Histogram,

Fare amount'',

xlim = c(0,100), 100);

dev.off();

# Open a pdf file and output an xyplot of tip amount vs. fare
amount using lattice;

# Only 10,000 sampled observations are plotted here, otherwise file


is large.

dest_filename = tempfile(pattern =
''rXYPlots_Tip_vs_Fare_Amount_'', tmpdir = mainDir)

dest_filename = paste(dest_filename, ''.pdf'',sep="")

print(dest_filename, quote=FALSE);

pdf(file=dest_filename, height=4, width=4);

plot(tip_amount ~ fare_amount,

data = InputDataSet[sample(nrow(InputDataSet), 10000), ],

ylim = c(0,50),

xlim = c(0,150),

cex=.5,

pch=19,

col=''darkgreen'',

main = ''Tip amount by Fare amount'',

xlab=''Fare Amount ($)'',

ylab = ''Tip Amount ($)'');

dev.off();',

@input_data_1 = @query

END

Key points to understand in this script include the following:

The output of the SELECT query within the stored procedure is stored in the
default R data frame, InputDataSet . Various R plotting functions can then be called
to generate the actual graphics files. Most of the embedded R script represents
options for these graphics functions, such as plot or hist .

The R device is set to off because you are running this command as an external
script in SQL Server. Typically in R, when you issue a high-level plotting command,
R opens a graphics window, called a device. You can turn the device off if you are
writing to a file or handling the output some other way.

All files are saved to the local folder C:\temp\Plots. The destination folder is
defined by the arguments provided to the R script as part of the stored procedure.
To output the files to a different folder, change the value of the mainDir variable in
the R script embedded in the stored procedure. You can also modify the script to
output different formats, more files, and so on.

Execute the stored procedure


Run the following statement to export binary plot data to JPEG and PDF file formats.

SQL

EXEC RPlotHist

Results

text

STDOUT message(s) from external script:

[1] Creating output plot files:[1]


C:\temp\plots\rHistogram_Tipped_18887f6265d4.jpg[1]

C:\temp\plots\rHistograms_Tip_and_Fare_Amount_1888441e542c.pdf[1]

C:\temp\plots\rXYPlots_Tip_vs_Fare_Amount_18887c9d517b.pdf

The numbers in the file names are randomly generated to ensure that you don't get an
error when trying to write to an existing file.

View output
To view the plot, open the destination folder and review the files that were created by
the R code in the stored procedure.

1. Go the folder indicated in the STDOUT message (in the example, this is
C:\temp\plots)

2. Open rHistogram_Tipped.jpg to show the number of trips that got a tip vs. the
trips that got no tip (this histogram is similar to the one you generated in the
previous step).

3. Open rHistograms_Tip_and_Fare_Amount.pdf to view distribution of tip amounts,


plotted against the fare amounts.

4. Open rXYPlots_Tip_vs_Fare_Amount.pdf to view a scatterplot with the fare amount


on the x-axis and the tip amount on the y-axis.
Next steps
In this article, you:

" Reviewed the sample data


" Created plots using R in T-SQL
" Output plots in multiple file formats

R tutorial: Create data features


R tutorial: Create data features
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part three of this five-part tutorial series, you'll learn how to create features from raw
data by using a Transact-SQL function. You'll then call that function from a SQL stored
procedure to create a table that contains the feature values.

In this article, you'll:

" Modify a custom function to calculate trip distance


" Save the features using another custom function

In part one, you installed the prerequisites and restored the sample database.

In part two, you reviewed the sample data and generated some plots.

In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.

About feature engineering


After several rounds of data exploration, you have collected some insights from the
data, and are ready to move on to feature engineering. This process of creating
meaningful features from the raw data is a critical step in creating analytical models.

In this dataset, the distance values are based on the reported meter distance, and don't
necessarily represent geographical distance or the actual distance traveled. Therefore,
you'll need to calculate the direct distance between the pick-up and drop-off points, by
using the coordinates available in the source NYC Taxi dataset. You can do this by using
the Haversine formula in a custom Transact-SQL function.

You'll use one custom T-SQL function, fnCalculateDistance, to compute the distance
using the Haversine formula, and use a second custom T-SQL function,
fnEngineerFeatures, to create a table containing all the features.
The overall process is as follows:

Create the T-SQL function that performs the calculations

Call the function to generate the feature data

Save the feature data to a table

Calculate trip distance using


fnCalculateDistance
The function fnCalculateDistance should have been downloaded and registered with
SQL Server as part of the preparation for this tutorial. Take a minute to review the code.

1. In Management Studio, expand Programmability, expand Functions and then


Scalar-valued functions.

2. Right-click fnCalculateDistance, and select Modify to open the Transact-SQL script


in a new query window.

SQL

CREATE FUNCTION [dbo].[fnCalculateDistance] (@Lat1 float, @Long1 float,


@Lat2 float, @Long2 float)

-- User-defined function that calculates the direct distance between


two geographical coordinates.

RETURNS float

AS

BEGIN

DECLARE @distance decimal(28, 10)

-- Convert to radians

SET @Lat1 = @Lat1 / 57.2958

SET @Long1 = @Long1 / 57.2958

SET @Lat2 = @Lat2 / 57.2958

SET @Long2 = @Long2 / 57.2958

-- Calculate distance

SET @distance = (SIN(@Lat1) * SIN(@Lat2)) + (COS(@Lat1) * COS(@Lat2)


* COS(@Long2 - @Long1))

--Convert to miles

IF @distance <> 0

BEGIN

SET @distance = 3958.75 * ATAN(SQRT(1 - POWER(@distance, 2)) /


@distance);

END

RETURN @distance

END

GO

The function is a scalar-valued function, returning a single data value of a


predefined type.

It takes latitude and longitude values as inputs, obtained from trip pick-up
and drop-off locations. The Haversine formula converts locations to radians
and uses those values to compute the direct distance in miles between those
two locations.

Generate the features using fnEngineerFeatures


To add the computed values to a table that can be used for training the model, you'll
use another function, fnEngineerFeatures. The new function calls the previously created
T-SQL function, fnCalculateDistance, to get the direct distance between pick-up and
drop-off locations.

1. Take a minute to review the code for the custom T-SQL function,
fnEngineerFeatures, which should have been created for you as part of the
preparation for this walkthrough.

SQL

CREATE FUNCTION [dbo].[fnEngineerFeatures] (

@passenger_count int = 0,

@trip_distance float = 0,

@trip_time_in_secs int = 0,

@pickup_latitude float = 0,

@pickup_longitude float = 0,

@dropoff_latitude float = 0,

@dropoff_longitude float = 0)

RETURNS TABLE

AS

RETURN

-- Add the SELECT statement with parameter references here

SELECT

@passenger_count AS passenger_count,

@trip_distance AS trip_distance,

@trip_time_in_secs AS trip_time_in_secs,

[dbo].[fnCalculateDistance](@pickup_latitude, @pickup_longitude,
@dropoff_latitude, @dropoff_longitude) AS direct_distance

GO

This table-valued function that takes multiple columns as inputs, and outputs
a table with multiple feature columns.
The purpose of this function is to create new features for use in building a
model.

2. To verify that this function works, use it to calculate the geographical distance for
those trips where the metered distance was 0 but the pick-up and drop-off
locations were different.

SQL

SELECT tipped, fare_amount, passenger_count,(trip_time_in_secs/60)


as TripMinutes,

trip_distance, pickup_datetime, dropoff_datetime,

dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) AS direct_distance

FROM nyctaxi_sample

WHERE pickup_longitude != dropoff_longitude and pickup_latitude !=


dropoff_latitude and trip_distance = 0

ORDER BY trip_time_in_secs DESC

As you can see, the distance reported by the meter doesn't always correspond to
geographical distance. This is why feature engineering is so important. You can use
these improved data features to train a machine learning model using R.

Next steps
In this article, you:

" Modified a custom function to calculate trip distance


" Saved the features using another custom function

R tutorial: Train and save model


R tutorial: Train and save model
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part four of this five-part tutorial series, you'll learn how to train a machine learning
model by using R. You'll train the model using the data features you created in the
previous part, and then save the trained model in a SQL Server table. In this case, the R
packages are already installed with R Services (In-Database), so everything can be done
from SQL.

In this article, you'll:

" Create and train a model using a SQL stored procedure


" Save the trained model to a SQL table

In part one, you installed the prerequisites and restored the sample database.

In part two, you reviewed the sample data and generate some plots.

In part three, you learned how to create features from raw data by using a Transact-SQL
function. You then called that function from a stored procedure to create a table that
contains the feature values.

In part five, you'll learn how to operationalize the models that you trained and saved in
part four.

Create the stored procedure


When calling R from T-SQL, you use the system stored procedure,
sp_execute_external_script. However, for processes that you repeat often, such as
retraining a model, it is easier to encapsulate the call to sp_execute_external_script in
another stored procedure.

1. In Management Studio, open a new Query window.

2. Run the following statement to create the stored procedure RTrainLogitModel.


This stored procedure defines the input data and uses glm to create a logistic
regression model.

SQL
CREATE PROCEDURE [dbo].[RTrainLogitModel] (@trained_model
varbinary(max) OUTPUT)

AS

BEGIN

DECLARE @inquery nvarchar(max) = N'

select tipped, fare_amount,


passenger_count,trip_time_in_secs,trip_distance,

pickup_datetime, dropoff_datetime,

dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance

from nyctaxi_sample

tablesample (70 percent) repeatable (98052)

'

EXEC sp_execute_external_script @language = N'R',

@script = N'

## Create model

logitObj <- glm(tipped ~ passenger_count + trip_distance +


trip_time_in_secs + direct_distance, data = InputDataSet, family =
binomial)

summary(logitObj)

## Serialize model

trained_model <- as.raw(serialize(logitObj, NULL));

',

@input_data_1 = @inquery,

@params = N'@trained_model varbinary(max) OUTPUT',

@trained_model = @trained_model OUTPUT;

END

GO

To ensure that some data is left over to test the model, 70% of the data are
randomly selected from the taxi data table for training purposes.

The SELECT query uses the custom scalar function fnCalculateDistance to


calculate the direct distance between the pick-up and drop-off locations. The
results of the query are stored in the default R input variable, InputDataset .

The R script calls the R function glm to create the logistic regression model.

The binary variable tipped is used as the label or outcome column, and the
model is fit using these feature columns: passenger_count, trip_distance,
trip_time_in_secs, and direct_distance.

The trained model, saved in the R variable logitObj , is serialized and


returned as an output parameter.
Train and deploy the R model using the stored
procedure
Because the stored procedure already includes a definition of the input data, you don't
need to provide an input query.

1. To train and deploy the R model, call the stored procedure and insert it into the
database table nyc_taxi_models, so that you can use it for future predictions:

SQL

DECLARE @model VARBINARY(MAX);

EXEC RTrainLogitModel @model OUTPUT;

INSERT INTO nyc_taxi_models (name, model) VALUES('RTrainLogit_model',


@model);

2. Watch the Messages window of Management Studio for messages that would be
piped to R's stdout stream, like this message:

"STDOUT message(s) from external script: Rows Read: 1193025, Total Rows
Processed: 1193025, Total Chunk Time: 0.093 seconds"

3. When the statement has completed, open the table nyc_taxi_models. Processing of
the data and fitting the model might take a while.

You can see that one new row has been added, which contains the serialized
model in the column model and the model name RTrainLogit_model in the column
name.

text

model name
---------------------------- ------------------

0x580A00000002000302020.... RTrainLogit_model

In the next part of this tutorial you'll use the trained model to generate predictions.

Next steps
In this article, you:

" Created and trained a model using a SQL stored procedure


" Saved the trained model to a SQL table
R tutorial: Run predictions in SQL stored procedures
R tutorial: Run predictions in SQL stored
procedures
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

In part five of this five-part tutorial series, you'll learn to operationalize the model that
you trained and saved in the previous part by using the model to predict potential
outcomes. The model is wrapped in a stored procedure which can be called directly by
other applications.

This article demonstrates two ways to perform scoring:

Batch scoring mode: Use a SELECT query as an input to the stored procedure. The
stored procedure returns a table of observations corresponding to the input cases.

Individual scoring mode: Pass a set of individual parameter values as input. The
stored procedure returns a single row or value.

In this article, you'll:

" Create and use stored procedures for batch scoring


" Create and use stored procedures for scoring a single row

In part one, you installed the prerequisites and restored the sample database.

In part two, you reviewed the sample data and generated some plots.

In part three, you learned how to create features from raw data by using a Transact-SQL
function. You then called that function from a stored procedure to create a table that
contains the feature values.

In part four, you loaded the modules and called the necessary functions to create and
train the model using a SQL Server stored procedure.

Basic scoring
The stored procedure RPredict illustrates the basic syntax for wrapping a PREDICT call in
a stored procedure.

SQL
CREATE PROCEDURE [dbo].[RPredict] (@model varchar(250), @inquery
nvarchar(max))

AS

BEGIN

DECLARE @lmodel2 varbinary(max) = (SELECT model FROM nyc_taxi_models WHERE


name = @model);

EXEC sp_execute_external_script @language = N'R',

@script = N'

mod <- unserialize(as.raw(model));

print(summary(mod))

OutputDataSet <- data.frame(predict(mod, InputDataSet, type =


"response"));

str(OutputDataSet)

print(OutputDataSet)

',

@input_data_1 = @inquery,

@params = N'@model varbinary(max)',

@model = @lmodel2

WITH RESULT SETS (("Score" float));

END

GO

The SELECT statement gets the serialized model from the database, and stores the
model in the R variable mod for further processing using R.

The new cases for scoring are obtained from the Transact-SQL query specified in
@inquery , the first parameter to the stored procedure. As the query data is read,

the rows are saved in the default data frame, InputDataSet . This data frame is
passed to the PREDICT function which generates the scores.

OutputDataSet <- data.frame(predict(mod, InputDataSet, type = "response"));

Because a data.frame can contain a single row, you can use the same code for
batch or single scoring.

The value returned by the PREDICT function is a float that represents the
probability that the driver gets a tip of any amount.

Batch scoring (a list of predictions)


A more common scenario is to generate predictions for multiple observations in batch
mode. In this step, let's see how batch scoring works.

1. Start by getting a smaller set of input data to work with. This query creates a "top
10" list of trips with passenger count and other features needed to make a
prediction.

SQL

SELECT TOP 10 a.passenger_count AS passenger_count, a.trip_time_in_secs


AS trip_time_in_secs, a.trip_distance AS trip_distance,
a.dropoff_datetime AS dropoff_datetime,
dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude,dropoff_longitude) AS direct_distance

FROM (SELECT medallion, hack_license, pickup_datetime,


passenger_count,trip_time_in_secs,trip_distance, dropoff_datetime,
pickup_latitude, pickup_longitude, dropoff_latitude, dropoff_longitude
FROM nyctaxi_sample)a

LEFT OUTER JOIN

(SELECT medallion, hack_license, pickup_datetime FROM nyctaxi_sample


TABLESAMPLE (70 percent) REPEATABLE (98052) )b

ON a.medallion=b.medallion AND a.hack_license=b.hack_license

AND a.pickup_datetime=b.pickup_datetime

WHERE b.medallion IS NULL

Sample results

text

passenger_count trip_time_in_secs trip_distance dropoff_datetime


direct_distance

1 283 0.7 2013-03-27


14:54:50.000 0.5427964547

1 289 0.7 2013-02-24


12:55:29.000 0.3797099614

1 214 0.7 2013-06-26


13:28:10.000 0.6970098661

2. Create a stored procedure called RPredictBatchOutput in Management Studio.

SQL

CREATE PROCEDURE [dbo].[RPredictBatchOutput] (@model varchar(250),


@inquery nvarchar(max))
AS

BEGIN

DECLARE @lmodel2 varbinary(max) = (SELECT model FROM nyc_taxi_models


WHERE name = @model);

EXEC sp_execute_external_script

@language = N'R',

@script = N'

mod <- unserialize(as.raw(model));

print(summary(mod))

OutputDataSet <- data.frame(predict(mod, InputDataSet, type =


"response"));

str(OutputDataSet)

print(OutputDataSet)

',

@input_data_1 = @inquery,

@params = N'@model varbinary(max)',

@model = @lmodel2

WITH RESULT SETS ((Score float));

END

3. Provide the query text in a variable and pass it as a parameter to the stored
procedure:

SQL

-- Define the input data

DECLARE @query_string nvarchar(max)

SET @query_string='SELECT TOP 10 a.passenger_count as passenger_count,


a.trip_time_in_secs AS trip_time_in_secs, a.trip_distance AS
trip_distance, a.dropoff_datetime AS dropoff_datetime,
dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude,dropoff_longitude) AS direct_distance FROM (SELECT
medallion, hack_license, pickup_datetime,
passenger_count,trip_time_in_secs,trip_distance, dropoff_datetime,
pickup_latitude, pickup_longitude, dropoff_latitude, dropoff_longitude
FROM nyctaxi_sample )a LEFT OUTER JOIN (SELECT medallion,
hack_license, pickup_datetime FROM nyctaxi_sample TABLESAMPLE (70
percent) REPEATABLE (98052))b ON a.medallion=b.medallion AND
a.hack_license=b.hack_license AND a.pickup_datetime=b.pickup_datetime
WHERE b.medallion is null'

-- Call the stored procedure for scoring and pass the input data

EXEC [dbo].[RPredictBatchOutput] @model = 'RTrainLogit_model', @inquery


= @query_string;

The stored procedure returns a series of values representing the prediction for each of
the top 10 trips. However, the top trips are also single-passenger trips with a relatively
short trip distance, for which the driver is unlikely to get a tip.

 Tip

Rather than returning just the "yes-tip" and "no-tip" results, you could also return
the probability score for the prediction, and then apply a WHERE clause to the
Score column values to categorize the score as "likely to tip" or "unlikely to tip",
using a threshold value such as 0.5 or 0.7. This step is not included in the stored
procedure but it would be easy to implement.
Single-row scoring of multiple inputs
Sometimes you want to pass in multiple input values and get a single prediction based
on those values. For example, you could set up an Excel worksheet, web application, or
Reporting Services report to call the stored procedure and provide inputs typed or
selected by users from those applications.

In this section, you learn how to create single predictions using a stored procedure that
takes multiple inputs, such as passenger count, trip distance, and so forth. The stored
procedure creates a score based on the previously stored R model.

If you call the stored procedure from an external application, make sure that the data
matches the requirements of the R model. This might include ensuring that the input
data can be cast or converted to an R data type, or validating data type and data length.

1. Create a stored procedure RPredictSingleRow.

SQL

CREATE PROCEDURE [dbo].[RPredictSingleRow] @model varchar(50),


@passenger_count int = 0, @trip_distance float = 0, @trip_time_in_secs
int = 0, @pickup_latitude float = 0, @pickup_longitude float = 0,
@dropoff_latitude float = 0, @dropoff_longitude float = 0

AS

BEGIN

DECLARE @inquery nvarchar(max) = N'SELECT * FROM [dbo].


[fnEngineerFeatures](@passenger_count, @trip_distance,
@trip_time_in_secs, @pickup_latitude, @pickup_longitude,
@dropoff_latitude, @dropoff_longitude)';

DECLARE @lmodel2 varbinary(max) = (SELECT model FROM nyc_taxi_models


WHERE name = @model);

EXEC sp_execute_external_script
@language = N'R',

@script = N'

mod <- unserialize(as.raw(model));

print(summary(mod));

OutputDataSet <- data.frame(predict(mod, InputDataSet, type =


"response"));

str(OutputDataSet);

print(OutputDataSet);

',

@input_data_1 = @inquery,

@params = N'@model varbinary(max),@passenger_count int,@trip_distance


float,@trip_time_in_secs int , @pickup_latitude float
,@pickup_longitude float ,@dropoff_latitude float ,@dropoff_longitude
float', @model = @lmodel2, @passenger_count =@passenger_count,
@trip_distance=@trip_distance, @trip_time_in_secs=@trip_time_in_secs,
@pickup_latitude=@pickup_latitude, @pickup_longitude=@pickup_longitude,
@dropoff_latitude=@dropoff_latitude,
@dropoff_longitude=@dropoff_longitude

WITH RESULT SETS ((Score float));

END

2. Try it out, by providing the values manually.

Open a new Query window, and call the stored procedure, providing values for
each of the parameters. The parameters represent feature columns used by the
model and are required.

SQL

EXEC [dbo].[RPredictSingleRow] @model = 'RTrainLogit_model',

@passenger_count = 1,

@trip_distance = 2.5,

@trip_time_in_secs = 631,

@pickup_latitude = 40.763958,

@pickup_longitude = -73.973373,

@dropoff_latitude = 40.782139,

@dropoff_longitude = -73.977303

Or, use this shorter form supported for parameters to a stored procedure:

SQL

EXEC [dbo].[RPredictSingleRow] 'RTrainLogit_model', 1, 2.5, 631,


40.763958,-73.973373, 40.782139,-73.977303

3. The results indicate that the probability of getting a tip is low (zero) on these top
10 trips, since all are single-passenger trips over a relatively short distance.

Conclusions
Now that you have learned to embed R code in stored procedures, you can extend
these practices to build models of your own. The integration with Transact-SQL makes it
much easier to deploy R models for prediction and to incorporate model retraining as
part of an enterprise data workflow.

Next steps
In this article, you:

" Created and used stored procedures for batch scoring


" Created and used stored procedures for scoring a single row

For more information about R, see R extension in SQL Server.


Plot histograms in Python
Article • 12/23/2022

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

This article describes how to plot data using the Python package pandas'.hist() . A SQL
database is the source used to visualize the histogram data intervals that have
consecutive, non-overlapping values.

Prerequisites
Azure SQL Managed Instance

SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.

Azure Data Studio. To install, see Azure Data Studio.

Restore sample DW database to get sample data used in this article.

Verify restored database


You can verify that the restored database exists by querying the Person.CountryRegion
table:

SQL

USE AdventureWorksDW;

SELECT * FROM Person.CountryRegion;

Install Python packages


Download and Install Azure Data Studio.

Install the following Python packages:

pyodbc
pandas

sqlalchemy

matplotlib
To install these packages:

1. In your Azure Data Studio notebook, select Manage Packages.


2. In the Manage Packages pane, select the Add new tab.
3. For each of the following packages, enter the package name, select Search, then
select Install.

Plot histogram
The distributed data displayed in the histogram is based on a SQL query from
AdventureWorksDW . The histogram visualizes data and the frequency of data values.

Edit the connection string variables: 'server', 'database', 'username', and 'password' to
connect to SQL Server database.

To create a new notebook:

1. In Azure Data Studio, select File, select New Notebook.


2. In the notebook, select kernel Python3, select the +code.
3. Paste code in notebook, select Run All.

Python

import pyodbc

import pandas as pd

import matplotlib

import sqlalchemy

from sqlalchemy import create_engine

matplotlib.use('TkAgg', force=True)

from matplotlib import pyplot as plt

# Some other example server values are

# server = 'localhost\sqlexpress' # for a named instance

# server = 'myserver,port' # to specify an alternate port

server = 'servername'

database = 'AdventureWorksDW2019'

username = 'yourusername'

password = 'databasename'

url = 'mssql+pyodbc://{user}:{passwd}@{host}:{port}/{db}?
driver=SQL+Server'.format(user=username, passwd=password, host=server,
port=port, db=database)

engine = create_engine(url)

sql = "SELECT DATEDIFF(year, c.BirthDate, GETDATE()) AS Age FROM [dbo].


[FactInternetSales] s INNER JOIN dbo.DimCustomer c ON s.CustomerKey =
c.CustomerKey"

df = pd.read_sql(sql, engine)

df.hist(bins=50)

plt.show()

The display shows the age distribution of customers in the FactInternetSales table.
Insert data from a SQL table into a
Python pandas dataframe
Article • 02/28/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

This article describes how to insert SQL data into a pandas dataframe using the
pyodbc package in Python. The rows and columns of data contained within the
dataframe can be used for further data exploration.

Prerequisites
Azure SQL Managed Instance

SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.

Azure Data Studio. To install, see Azure Data Studio.

Restore sample database to get sample data used in this article.

Verify restored database


You can verify that the restored database exists by querying the Person.CountryRegion
table:

SQL

USE AdventureWorks;

SELECT * FROM Person.CountryRegion;

Install Python packages


Download and Install Azure Data Studio.

Install the following Python packages:

pyodbc
pandas

To install these packages:


1. In your Azure Data Studio notebook, select Manage Packages.
2. In the Manage Packages pane, select the Add new tab.
3. For each of the following packages, enter the package name, click Search, then
click Install.

Insert data
Use the following script to select data from Person.CountryRegion table and insert into a
dataframe. Edit the connection string variables: 'server', 'database', 'username', and
'password' to connect to SQL.

To create a new notebook:

1. In Azure Data Studio, select File, select New Notebook.


2. In the notebook, select kernel Python3, select the +code.
3. Paste code in notebook, select Run All.

Python

import pyodbc

import pandas as pd

# Some other example server values are

# server = 'localhost\sqlexpress' # for a named instance

# server = 'myserver,port' # to specify an alternate port

server = 'servername'

database = 'AdventureWorks'

username = 'yourusername'

password = 'databasename'

cnxn = pyodbc.connect('DRIVER={SQL
Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+
password)

cursor = cnxn.cursor()

# select 26 rows from SQL table to insert in dataframe.

query = "SELECT [CountryRegionCode], [Name] FROM Person.CountryRegion;"

df = pd.read_sql(query, cnxn)

print(df.head(26))

Output

The print command in the preceding script displays the rows of data from the pandas
dataframe df .

text

CountryRegionCode Name

0 AF Afghanistan

1 AL Albania

2 DZ Algeria

3 AS American Samoa

4 AD Andorra

5 AO Angola

6 AI Anguilla

7 AQ Antarctica

8 AG Antigua and Barbuda

9 AR Argentina

10 AM Armenia

11 AW Aruba

12 AU Australia

13 AT Austria

14 AZ Azerbaijan

15 BS Bahamas, The

16 BH Bahrain

17 BD Bangladesh

18 BB Barbados

19 BY Belarus

20 BE Belgium

21 BZ Belize

22 BJ Benin

23 BM Bermuda

24 BT Bhutan

25 BO Bolivia

Next steps
Insert Python dataframe into SQL
Insert Python dataframe into SQL table
Article • 02/28/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

This article describes how to insert a pandas dataframe into a SQL database using the
pyodbc package in Python.

Prerequisites
Azure SQL Managed Instance

SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.

Azure Data Studio. To install, see Download and install Azure Data Studio.

Follow the steps in AdventureWorks sample databases to restore the OLTP version
of the AdventureWorks sample database for your version of SQL Server.

You can verify that the database was restored correctly by querying the
HumanResources.Department table:

SQL

USE AdventureWorks;

SELECT * FROM HumanResources.Department;

Install Python packages


1. In Azure Data Studio, open a new notebook and connect to the Python 3 kernel.

2. Select Manage Packages.

3. In the Manage Packages pane, select the Add new tab.

4. For each of the following packages, enter the package name, click Search, then
click Install.

pyodbc
pandas

Create a sample CSV file


Copy the following text and save it to a file named department.csv .

text

DepartmentID,Name,GroupName,

1,Engineering,Research and Development,

2,Tool Design,Research and Development,

3,Sales,Sales and Marketing,

4,Marketing,Sales and Marketing,

5,Purchasing,Inventory Management,

6,Research and Development,Research and Development,

7,Production,Manufacturing,

8,Production Control,Manufacturing,

9,Human Resources,Executive General and Administration,

10,Finance,Executive General and Administration,

11,Information Services,Executive General and Administration,

12,Document Control,Quality Assurance,

13,Quality Assurance,Quality Assurance,

14,Facilities and Maintenance,Executive General and Administration,

15,Shipping and Receiving,Inventory Management,

16,Executive,Executive General and Administration

Create a new database table


1. Follow the steps in Connect to a SQL Server to connect to the AdventureWorks
database.

2. Create a table named HumanResources.DepartmentTest. The SQL table will be


used for the dataframe insertion.

SQL

CREATE TABLE [HumanResources].[DepartmentTest](

[DepartmentID] [smallint] NOT NULL,

[Name] [dbo].[Name] NOT NULL,

[GroupName] [dbo].[Name] NOT NULL

GO

Load a dataframe from the CSV file


Use the Python pandas package to create a dataframe, load the CSV file, and then load
the dataframe into the new SQL table, HumanResources.DepartmentTest.

1. Connect to the Python 3 kernel.

2. Paste the following code into a code cell, updating the code with the correct values
for server , database , username , password , and the location of the CSV file.

Python

import pyodbc

import pandas as pd

# insert data from csv file into dataframe.

# working directory for csv file: type "pwd" in Azure Data Studio or
Linux

# working directory in Windows c:\users\username

df = pd.read_csv("c:\\user\\username\department.csv")

# Some other example server values are

# server = 'localhost\sqlexpress' # for a named instance

# server = 'myserver,port' # to specify an alternate port

server = 'yourservername'

database = 'AdventureWorks'

username = 'username'

password = 'yourpassword'

cnxn = pyodbc.connect('DRIVER={SQL
Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+
password)

cursor = cnxn.cursor()

# Insert Dataframe into SQL Server:

for index, row in df.iterrows():

cursor.execute("INSERT INTO HumanResources.DepartmentTest


(DepartmentID,Name,GroupName) values(?,?,?)", row.DepartmentID,
row.Name, row.GroupName)

cnxn.commit()

cursor.close()

3. Run the cell.

Confirm data in the database


Connect to the SQL kernel and AdventureWorks database and run the following SQL
statement to confirm the table was successfully loaded with data from the dataframe.

SQL

SELECT count(*) from HumanResources.DepartmentTest;

Results
Bash

(No column name)

16

Next steps
Plot a histogram for data exploration with Python
Data type mappings between Python
and SQL Server
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

This article lists the supported data types, and the data type conversions performed,
when using the Python integration feature in SQL Server Machine Learning Services.

Python supports a limited number of data types in comparison to SQL Server. As a


result, whenever you use data from SQL Server in Python scripts, SQL data might be
implicitly converted to a compatible Python data type. However, often an exact
conversion cannot be performed automatically and an error is returned.

Python and SQL Data Types


This table lists the implicit conversions that are provided. Other data types are not
supported.

SQL type Python Description


type

bigint float64

binary bytes

bit bool

char str

date datetime

datetime datetime Supported with SQL Server 2017 CU6 and above (with NumPy
arrays of type datetime.datetime or Pandas pandas.Timestamp ).
sp_execute_external_script now supports datetime types with
fractional seconds.

float float64

nchar str

nvarchar str

nvarchar(max) str
SQL type Python Description
type

real float64

smalldatetime datetime

smallint int32

tinyint int32

uniqueidentifier str

varbinary bytes

varbinary(max) bytes

varchar(n) str

varchar(max) str

See also
Data type mappings between R and SQL Server
Data type mappings between R and SQL
Server
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

This article lists the supported data types, and the data type conversions performed,
when using the R integration feature in SQL Server Machine Learning Services.

Base R version
SQL Server 2016 R Services and SQL Server Machine Learning Services with R are
aligned with specific releases of Microsoft R Open. For example, the latest release, SQL
Server 2019 Machine Learning Services, is built on Microsoft R Open 3.5.2.

To view the R version associated with a particular instance of SQL Server, open RGui in
the SQL instance. For example, the path for the default instance in SQL Server 2019
would be: C:\Program Files\Microsoft SQL
Server\MSSQL15.MSSQLSERVER\R_SERVICES\bin\x64\Rgui.exe .

The tool loads base R and other libraries. Package version information is provided in a
notification for each package that is loaded at session start up.

R and SQL Data Types


While SQL Server supports several dozen data types, R has a limited number of scalar
data types (numeric, integer, complex, logical, character, date/time, and raw). As a result,
whenever you use data from SQL Server in R scripts, data might be implicitly converted
to a compatible data type. However, often an exact conversion cannot be performed
automatically, and an error is returned, such as "Unhandled SQL data type".

This section lists the implicit conversions that are provided, and lists unsupported data
types. Some guidance is provided for mapping data types between R and SQL Server.

Implicit data type conversions


The following table shows the changes in data types and values when data from SQL
Server is used in an R script and then returned to SQL Server.
SQL type R class RESULT SET Comments
type

bigint numeric float Executing an R script with


sp_execute_external_script allows bigint data
type as input data. However, because they are
converted to R's numeric type, it suffers a
precision loss with values that are very high or
have decimal point values. R only support up to
53-bit integers and then it will start to have
precision loss.

binary(n)
raw varbinary(max) Only allowed as input parameter and output
n <= 8000

bit logical bit

char(n)
character varchar(max) The input data frame (input_data_1) are created
n <= 8000 without explicitly setting of stringsAsFactors
parameter so the column type will depend on
the default.stringsAsFactors() in R

datetime POSIXct datetime Represented as GMT

date POSIXct datetime Represented as GMT

decimal(p,s) numeric float Executing an R script with


sp_execute_external_script allows decimal
data type as input data. However, because they
are converted to R's numeric type, it suffers a
precision loss with values that are very high or
have decimal point values.
sp_execute_external_script with an R script
does not support the full range of the data type
and would alter the last few decimal digits
especially those with fraction.

float numeric float

int integer int

money numeric float Executing an R script with


sp_execute_external_script allows money data
type as input data. However, because they are
converted to R's numeric type, it suffers a
precision loss with values that are very high or
have decimal point values. Sometimes cent
values would be imprecise and a warning would
be issued: Warning: unable to precisely
represent cents values.
SQL type R class RESULT SET Comments
type

numeric(p,s) numeric float Executing an R script with


sp_execute_external_script allows numeric
data type as input data. However, because they
are converted to R's numeric type, it suffers a
precision loss with values that are very high or
have decimal point values.
sp_execute_external_script with an R script
does not support the full range of the data type
and would alter the last few decimal digits
especially those with fraction.

real numeric float

smalldatetime POSIXct datetime Represented as GMT

smallint integer int

smallmoney numeric float

tinyint integer int

uniqueidentifier character varchar(max)

varbinary(n)
raw varbinary(max) Only allowed as input parameter and output
n <= 8000

varbinary(max) raw varbinary(max) Only allowed as input parameter and output

varchar(n)
character varchar(max) The input data frame (input_data_1) are created
n <= 8000 without explicitly setting of stringsAsFactors
parameter so the column type will depend on
the default.stringsAsFactors() in R

Data types not supported by R


Of the categories of data types supported by the SQL Server type system, the following
types are likely to pose problems when passed to R code:

Data types listed in the Other section of the SQL type system article: cursor,
timestamp, hierarchyid, uniqueidentifier, sql_variant, xml, table
All spatial types
image

Data types that might convert poorly


Most datetime types should work, except for datetimeoffset.
Most numeric data types are supported, but conversions might fail for money and
smallmoney.
varchar is supported, but because SQL Server uses Unicode as a rule, use of
nvarchar and other Unicode text data types is recommended where possible.
Functions from the RevoScaleR library prefixed with rx can handle the SQL binary
data types (binary and varbinary), but in most scenarios special handling will be
required for these types. Most R code cannot work with binary columns.

For more information about SQL Server data types, see Data Types (Transact-SQL)

Changes in data types between SQL Server


versions
Microsoft SQL Server 2016 and later include improvements in data type conversions and
in several other operations. Most of these improvements offer increased precision when
you deal with floating-point types, as well as minor changes to operations on classic
datetime types.

These improvements are all available by default when you use a database compatibility
level of 130 or later. However, if you use a different compatibility level, or connect to a
database using an older version, you might see differences in the precision of numbers
or other results.

For more information, see SQL Server 2016 improvements in handling some data types
and uncommon operations .

Verify R and SQL data schemas in advance


In general, whenever you have any doubt about how a particular data type or data
structure is being used in R, use the str() function to get the internal structure and
type of the R object. The result of the function is printed to the R console and is also
available in the query results, in the Messages tab in Management Studio.

When retrieving data from a database for use in R code, you should always eliminate
columns that cannot be used in R, as well as columns that are not useful for analysis,
such as GUIDS (uniqueidentifier), timestamps and other columns used for auditing, or
lineage information created by ETL processes.

Note that inclusion of unnecessary columns can greatly reduce the performance of R
code, especially if high cardinality columns are used as factors. Therefore, we
recommend that you use SQL Server system stored procedures and information views to
get the data types for a given table in advance, and eliminate or convert incompatible
columns. For more information, see Information Schema Views in Transact-SQL

If a particular SQL Server data type is not supported by R, but you need to use the
columns of data in the R script, we recommend that you use the CAST and CONVERT
(Transact-SQL) functions to ensure that the data type conversions are performed as
intended before using the data in your R script.

2 Warning

If you use the rxDataStep to drop incompatible columns while moving data, be
aware that the arguments varsToKeep and varsToDrop are not supported for the
RxSqlServerData data source type.

Examples

Example 1: Implicit conversion


The following example demonstrates how data is transformed when making the round-
trip between SQL Server and R.

The query gets a series of values from a SQL Server table, and uses the stored procedure
sp_execute_external_script to output the values using the R runtime.

SQL

CREATE TABLE MyTable (

c1 int,

c2 varchar(10),

c3 uniqueidentifier

);

go

INSERT MyTable VALUES(1, 'Hello', newid());

INSERT MyTable VALUES(-11, 'world', newid());

SELECT * FROM MyTable;

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N'

inputDataSet["cR"] <- c(4, 2)


str(inputDataSet)

outputDataSet <- inputDataSet'

, @input_data_1 = N'SELECT c1, c2, c3 FROM MyTable'

, @input_data_1_name = N'inputDataSet'

, @output_data_1_name = N'outputDataSet'

WITH RESULT SETS((C1 int, C2 varchar(max), C3 varchar(max), C4 float));

Results

Row # C1 C2 C3 C4

1 1 Hello 6e225611-4b58-4995-a0a5-554d19012ef1 4

2 -11 world 6732ea46-2d5d-430b-8ao1-86e7f3351c3e 2

Note the use of the str function in R to get the schema of the output data. This
function returns the following information:

Output

'data.frame':2 obs. of 4 variables:

$ c1: int 1 -11

$ c2: Factor w/ 2 levels "Hello","world": 1 2

$ c3: Factor w/ 2 levels "6732EA46-2D5D-430B-8A01-86E7F3351C3E",..: 2 1

$ cR: num 4 2

From this, you can see that the following data type conversions were implicitly
performed as part of this query:

Column C1. The column is represented as int in SQL Server, integer in R, and int in
the output result set.

No type conversion was performed.

Column C2. The column is represented as varchar(10) in SQL Server, factor in R,


and varchar(max) in the output.

Note how the output changes; any string from R (either a factor or a regular string)
will be represented as varchar(max), no matter what the length of the strings is.

Column C3. The column is represented as uniqueidentifier in SQL Server,


character in R, and varchar(max) in the output.

Note the data type conversion that happens. SQL Server supports the
uniqueidentifier but R does not; therefore, the identifiers are represented as
strings.

Column C4. The column contains values generated by the R script and not present
in the original data.
Example 2: Dynamic column selection using R
The following example shows how you can use R code to check for invalid column types.
The gets the schema of a specified table using the SQL Server system views, and
removes any columns that have a specified invalid type.

connStr <- "Server=.;Database=TestDB;Trusted_Connection=Yes"

data <- RxSqlServerData(connectionString = connStr, sqlQuery = "SELECT


COLUMN_NAME FROM TestDB.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME =
N'testdata' AND DATA_TYPE <> 'image';")

columns <- rxImport(data)

columnList <- do.call(paste, c(as.list(columns$COLUMN_NAME), sep = ","))

sqlQuery <- paste("SELECT", columnList, "FROM testdata")

See also
Data type mappings between Python and SQL Server
Python Tutorial: Deploy a linear
regression model with SQL machine
learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

In part four of this four-part tutorial series, you'll deploy a linear regression model
developed in Python into an Azure SQL Managed Instance database using Machine
Learning Services.

In this article, you'll learn how to:

" Create a stored procedure that generates the machine learning model


" Store the model in a database table
" Create a stored procedure that makes predictions using the model
" Execute the model with new data

In part one, you learned how to restore the sample database.

In part two, you learned how to load the data from a database into a Python data frame,
and prepare the data in Python.

In part three, you learned how to train a linear regression machine learning model in
Python.

Prerequisites
Part four of this tutorial assumes you have completed part one and its
prerequisites.

Create a stored procedure that generates the


model
Now, using the Python scripts you developed, create a stored procedure
generate_rental_py_model that trains and generates the linear regression model using
LinearRegression from scikit-learn.

Run the following T-SQL statement in Azure Data Studio to create the stored procedure
to train the model.
SQL

-- Stored procedure that trains and generates a Python model using the
rental_data and a linear regression algorithm

DROP PROCEDURE IF EXISTS generate_rental_py_model;

go

CREATE PROCEDURE generate_rental_py_model (@trained_model varbinary(max)


OUTPUT)

AS

BEGIN

EXECUTE sp_execute_external_script

@language = N'Python'

, @script = N'

from sklearn.linear_model import LinearRegression

import pickle

df = rental_train_data

# Get all the columns from the dataframe.

columns = df.columns.tolist()

# Store the variable well be predicting on.

target = "RentalCount"

# Initialize the model class.

lin_model = LinearRegression()

# Fit the model to the training data.

lin_model.fit(df[columns], df[target])

# Before saving the model to the DB table, convert it to a binary object

trained_model = pickle.dumps(lin_model)'

, @input_data_1 = N'select "RentalCount", "Year", "Month", "Day", "WeekDay",


"Snow", "Holiday" from dbo.rental_data where Year < 2015'

, @input_data_1_name = N'rental_train_data'

, @params = N'@trained_model varbinary(max) OUTPUT'

, @trained_model = @trained_model OUTPUT;

END;

GO

Store the model in a database table


Create a table in the TutorialDB database and then save the model to the table.

1. Run the following T-SQL statement in Azure Data Studio to create a table called
dbo.rental_py_models which is used to store the model.

SQL
USE TutorialDB;

DROP TABLE IF EXISTS dbo.rental_py_models;

GO

CREATE TABLE dbo.rental_py_models (

model_name VARCHAR(30) NOT NULL DEFAULT('default model') PRIMARY


KEY,

model VARBINARY(MAX) NOT NULL

);

GO

2. Save the model to the table as a binary object, with the model name linear_model.

SQL

DECLARE @model VARBINARY(MAX);

EXECUTE generate_rental_py_model @model OUTPUT;

INSERT INTO rental_py_models (model_name, model) VALUES('linear_model',


@model);

Create a stored procedure that makes


predictions
1. Create a stored procedure py_predict_rentalcount that makes predictions using
the trained model and a set of new data. Run the T-SQL below in Azure Data
Studio.

SQL

DROP PROCEDURE IF EXISTS py_predict_rentalcount;

GO

CREATE PROCEDURE py_predict_rentalcount (@model varchar(100))

AS

BEGIN

DECLARE @py_model varbinary(max) = (select model from


rental_py_models where model_name = @model);

EXECUTE sp_execute_external_script

@language = N'Python',

@script = N'

# Import the scikit-learn function to compute error.

from sklearn.metrics import mean_squared_error

import pickle

import pandas

rental_model = pickle.loads(py_model)

df = rental_score_data

# Get all the columns from the dataframe.

columns = df.columns.tolist()

# Variable you will be predicting on.

target = "RentalCount"

# Generate the predictions for the test set.

lin_predictions = rental_model.predict(df[columns])

print(lin_predictions)

# Compute error between the test predictions and the actual values.

lin_mse = mean_squared_error(lin_predictions, df[target])

#print(lin_mse)

predictions_df = pandas.DataFrame(lin_predictions)

OutputDataSet = pandas.concat([predictions_df, df["RentalCount"],


df["Month"], df["Day"], df["WeekDay"], df["Snow"], df["Holiday"],
df["Year"]], axis=1)

'

, @input_data_1 = N'Select "RentalCount", "Year" ,"Month", "Day",


"WeekDay", "Snow", "Holiday" from rental_data where Year = 2015'

, @input_data_1_name = N'rental_score_data'

, @params = N'@py_model varbinary(max)'

, @py_model = @py_model

with result sets (("RentalCount_Predicted" float, "RentalCount" float,


"Month" float,"Day" float,"WeekDay" float,"Snow" float,"Holiday" float,
"Year" float));

END;

GO

2. Create a table for storing the predictions.

SQL

DROP TABLE IF EXISTS [dbo].[py_rental_predictions];

GO

CREATE TABLE [dbo].[py_rental_predictions](

[RentalCount_Predicted] [int] NULL,

[RentalCount_Actual] [int] NULL,

[Month] [int] NULL,

[Day] [int] NULL,

[WeekDay] [int] NULL,

[Snow] [int] NULL,

[Holiday] [int] NULL,

[Year] [int] NULL

) ON [PRIMARY]

GO

3. Execute the stored procedure to predict rental counts

SQL

--Insert the results of the predictions for test set into a table

INSERT INTO py_rental_predictions

EXEC py_predict_rentalcount 'linear_model';

-- Select contents of the table

SELECT * FROM py_rental_predictions;

You should see results similar to the following.

You have successfully created, trained, and deployed a model. You then used that model
in a stored procedure to predict values based on new data.

Next steps
In part four of this tutorial series, you completed these steps:

Create a stored procedure that generates the machine learning model


Store the model in a database table
Create a stored procedure that makes predictions using the model
Execute the model with new data

To learn more about using Python with SQL machine learning, see:

Python tutorials
Modify R/Python code to run in SQL
Server (In-Database) instances
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

This article provides high-level guidance on how to modify R or Python code to run as a
SQL Server stored procedure to improve performance when accessing SQL data.

When you move R/Python code from a local IDE or other environment to SQL Server,
the code generally works without further modification. This is especially true for simple
code, such as a function that takes some inputs and returns a value. It's also easier to
port solutions that use the RevoScaleR/revoscalepy packages, which support execution
in different execution contexts with minimal changes. Note that MicrosoftML applies to
SQL Server 2016 (13.x), SQL Server 2017 (14.x), and SQL Server 2019 (15.x), and does not
appear in SQL Server 2022 (16.x).

However, your code might require substantial changes if any of the following apply:

You use libraries that access the network or that cannot be installed on SQL Server.
The code makes separate calls to data sources outside SQL Server, such as Excel
worksheets, files on shares, and other databases.
You want to parameterize the stored procedure and run the code in the @script
parameter of sp_execute_external_script.
Your original solution includes multiple steps that might be more efficient in a
production environment if executed independently, such as data preparation or
feature engineering vs. model training, scoring, or reporting.
You want to optimize performance by changing libraries, using parallel execution,
or offloading some processing to SQL Server.

Step 1. Plan requirements and resources

Packages
Determine which packages are needed and ensure that they work on SQL Server.

Install packages in advance, in the default package library used by Machine


Learning Services. User libraries are not supported.
Data sources
If you intend to embed your code in sp_execute_external_script, identify primary
and secondary data sources.

Primary data sources are large datasets, such as model training data, or input
data for predictions. Plan to map your largest dataset to the input parameter of
sp_execute_external_script.

Secondary data sources are typically smaller data sets, such as lists of factors, or
additional grouping variables.

Currently, sp_execute_external_script supports only a single dataset as input to the


stored procedure. However, you can add multiple scalar or binary inputs.

Stored procedure calls preceded by EXECUTE cannot be used as an input to


sp_execute_external_script. You can use queries, views, or any other valid SELECT
statement.

Determine the outputs you need. If you run code using sp_execute_external_script,
the stored procedure can output only one data frame as a result. However, you can
also output multiple scalar outputs, including plots and models in binary format, as
well as other scalar values derived from code or SQL parameters.

Data types
For a detailed look at the data type mappings between R/Python and SQL Server, see
these articles:

Data type mappings between R and SQL Server


Data type mappings between Python and SQL Server

Take a look at the data types used in your R/Python code and do the following:

Make a checklist of possible data type issues.

All R/Python data types are supported by SQL Server Machine Learning Services.
However, SQL Server supports a greater variety of data types than does R or
Python. Therefore, some implicit data type conversions are performed when
moving SQL Server data to and from your code. You might need to explicitly cast
or convert some data.

NULL values are supported. However, R uses the na data construct to represent a
missing value, which is similar to a null.
Consider eliminating dependency on data that cannot be used by R: for example,
rowid and GUID data types from SQL Server cannot be consumed by R and will
generate errors.

Step 2. Convert or repackage code


How much you change your code depends on whether you intend to submit the code
from a remote client to run in the SQL Server compute context, or intend to deploy the
code as part of a stored procedure. The latter can provide better performance and data
security, though it imposes some additional requirements.

Define your primary input data as a SQL query wherever possible to avoid data
movement.

When running code in a stored procedure, you can pass through multiple scalar
inputs. For any parameters that you want to use in the output, add the OUTPUT
keyword.

For example, the following scalar input @model_name contains the model name,
which is also later modified by the R script, and output in its own column in the
results:

SQL

-- declare a local scalar variable which will be passed into the R


script

DECLARE @local_model_name AS NVARCHAR (50) = 'DefaultModel';

-- The below defines an OUTPUT variable in the scope of the R script,


called model_name

-- Syntactically, it is defined by using the @model_name name. Be aware


that the sequence

-- of these parameters is very important. Mandatory parameters to


sp_execute_external_script

-- must appear first, followed by the additional parameter definitions


like @params, etc.

EXECUTE sp_execute_external_script @language = N'R', @script = N'

model_name <- "Model name from R script"

OutputDataSet <- data.frame(InputDataSet$c1, model_name)'

, @input_data_1 = N'SELECT 1 AS c1'

, @params = N'@model_name nvarchar(50) OUTPUT'

, @model_name = @local_model_name OUTPUT;

-- optionally, examine the new value for the local variable:

SELECT @local_model_name;

Any variables that you pass in as parameters of the stored procedure


sp_execute_external_script must be mapped to variables in the code. By default,
variables are mapped by name. All columns in the input dataset must also be
mapped to variables in the script.

For example, assume your R script contains a formula like this one:

formula <- ArrDelay ~ CRSDepTime + DayOfWeek + CRSDepHour:DayOfWeek

An error is raised if the input dataset does not contain columns with the matching
names ArrDelay, CRSDepTime, DayOfWeek, CRSDepHour, and DayOfWeek.

In some cases, an output schema must be defined in advance for the results.

For example, to insert the data into a table, you must use the WITH RESULT SET
clause to specify the schema.

The output schema is also required if the script uses the argument @parallel=1 .
The reason is that multiple processes might be created by SQL Server to run the
query in parallel, with the results collected at the end. Therefore, the output
schema must be prepared before the parallel processes can be created.

In other cases, you can omit the result schema by using the option WITH RESULT
SETS UNDEFINED. This statement returns the dataset from the script without
naming the columns or specifying the SQL data types.

Consider generating timing or tracking data using T-SQL rather than R/Python.

For example, you could pass the system time or other information used for
auditing and storage by adding a T-SQL call that's passed through to the results,
rather than generating similar data in the script.

Improve performance and security


Run all queries in advance, and review the SQL Server query plans to identify tasks
that can be performed in parallel.

If the input query can be parallelized, set @parallel=1 as part of your arguments to
sp_execute_external_script.

Parallel processing with this flag is typically possible any time that SQL Server can
work with partitioned tables or distribute a query among multiple processes and
aggregate the results at the end. Parallel processing with this flag is typically not
possible if you're training models using algorithms that require all data to be read,
or if you need to create aggregates.

Review your code to determine if there are steps that can be performed
independently, or performed more efficiently, by using a separate stored
procedure call. For example, you might get better performance by doing feature
engineering or feature extraction separately and saving the values to a table.

Look for ways to use T-SQL rather than R/Python code for set-based
computations.

Consult with a database developer to determine ways to improve performance by


using SQL Server features such as memory-optimized tables, or, if you have
Enterprise Edition, Resource Governor.

If you're using R, then if possible replace conventional R functions with RevoScaleR


functions that support distributed execution. For more information, see
Comparison of Base R and RevoScaleR Functions.

Step 3. Prepare for deployment


Notify the administrator so that packages can be installed and tested in advance of
deploying your code.

In a development environment, it might be okay to install packages as part of your


code, but this is a bad practice in a production environment.

User libraries are not supported, regardless of whether you're using a stored
procedure or running R/Python code in the SQL Server compute context.

Package your R/Python code in a stored procedure


Create a T-SQL user-defined function, embedding your code using the sp-execute-
external-script statement.

If you have complex R code, use the R package sqlrutils to convert your code. This
package is designed to help experienced R users write good stored procedure
code.
You rewrite your R code as a single function with clearly defined inputs and
outputs, then use the sqlrutils package to generate the input and outputs in the
correct format. The sqlrutils package generates the complete stored procedure
code for you, and can also register the stored procedure in the database.
For more information and examples, see sqlrutils (SQL).

Integrate with other workflows


Leverage T-SQL tools and ETL processes. Perform feature engineering, feature
extraction, and data cleansing in advance as part of data workflows.

When you're working in a dedicated development environment, you might pull


data to your computer, analyze the data iteratively, and then write out or display
the results.
However, when standalone code is migrated to SQL Server, much of
this process can be simplified or delegated to other SQL Server tools.

Use secure, asynchronous visualization strategies.

Users of SQL Server often cannot access files on the server, and SQL client tools
typically do not support the R/Python graphics devices. If you generate plots or
other graphics as part of the solution, consider exporting the plots as binary data
and saving to a table, or writing.

Wrap prediction and scoring functions in stored procedures for direct access by
applications.

Next steps
To view examples of how R and Python solutions can be deployed in SQL Server, see
these tutorials:

R tutorials
Develop a predictive model in R with SQL machine learning

Predict NYC taxi fares with binary classification

Python tutorials
Predict ski rental with linear regression with SQL machine learning

Predict NYC taxi fares with binary classification


Convert R code to a stored procedure
using sqlrutils
Article • 11/18/2022

This article describes the steps for using the sqlrutils package to convert your R code to
run as a T-SQL stored procedure. For best possible results, your code might need to be
modified somewhat to ensure that all inputs can be parameterized.

Step 1. Rewrite R Script


For the best results, you should rewrite your R code to encapsulate it as a single
function.

All variables used by the function should be defined inside the function, or should be
defined as input parameters. See the sample code in this article.

Also, because the input parameters for the R function will become the input parameters
of the SQL stored procedure, you must ensure that your inputs and outputs conform to
the following type requirements:

Inputs
Among the input parameters, there can be at most one data frame.

The objects inside the data frame, as well as all other input parameters of the function,
must be of the following R data types:

POSIXct
numeric
character
integer
logical
raw

If an input type is not one of the above types, it needs to be serialized and passed into
the function as raw. In this case, the function must also include code to deserialize the
input.

Outputs
The function can output one of the following:

A data frame containing the supported data types. All objects in the data frame
must use one of the supported data types.
A named list, containing at most one data frame. All members of the list should
use one of the supported data types.
A NULL, if your function does not return any result

Step 2. Generate Required Objects


After your R code has been cleaned up and can be called as a single function, you will
use the functions in the sqlrutils package to prepare the inputs and outputs in a form
that can be passed to the constructor that actually builds the stored procedure.

sqlrutils provides functions that define the input data schema and type, and define the
output data schema and type. It also includes functions that can convert R objects to the
required output type. You might make multiple function calls to create the required
objects, depending on the data types your code uses.

Inputs
If your function takes inputs, for each input, call the following functions:

setInputData if the input is a data frame


setInputParameter for all other input types

When you make each function call, an R object is created that you will later pass as an
argument to StoredProcedure , to create the complete stored procedure.

Outputs
sqlrutils provides multiple functions for converting R objects such as lists to the
data.frame required by SQL Server.
If your function outputs a data frame directly,
without first wrapping it into a list, you can skip this step.
You can also skip the
conversion this step if your function returns NULL.

When converting a list or getting a particular item from a list, choose from these
functions:

setOutputData if the variable to get from the list is a data frame

setOutputParameter for all other members of the list


When you make each function call, an R object is created that you will later pass as an
argument to StoredProcedure , to create the complete stored procedure.

Step 3. Generate the Stored Procedure


When all input and output parameters are ready, make a call to the StoredProcedure
constructor.

Usage

StoredProcedure (func, spName, ..., filePath = NULL ,dbName = NULL,

connectionString = NULL, batchSeparator = "GO")

To illustrate, assume that you want to create a stored procedure named sp_rsample with
these parameters:

Uses an existing function foosql. The function was based on existing code in R
function foo, but you rewrote the function to conform to the requirements as
described in this section, and named the updated function as foosql.
Uses the data frame queryinput as input
Generates as output a data frame with the R variable name, sqloutput
You want to create the T-SQL code as a file in the C:\Temp folder, so that you can
run it using SQL Server Management Studio later

StoredProcedure (foosql, sp_rsample, queryinput, sqloutput, filePath =


"C:\\Temp")

7 Note

Because you are writing the file to the file system, you can omit the arguments that
define the database connection.

The output of the function is a T-SQL stored procedure that can be executed on an
instance of SQL Server 2016 (requires R Services) or SQL Server 2017 (requires Machine
Learning Services with R).

For additional examples, see the package help, by calling help(StoredProcedure) from
an R environment.
Step 4. Register and Run the Stored Procedure
There are two ways that you can run the stored procedure:

Using T-SQL, from any client that supports connections to the SQL Server 2016 or
SQL Server 2017 instance
From an R environment

Both methods require that the stored procedure be registered in the database where
you intend to use the stored procedure.

Register the stored procedure


You can register the stored procedure using R, or you can run the CREATE PROCEDURE
statement in T-SQL.

Using T-SQL. If you are more comfortable with T-SQL, open SQL Server
Management Studio (or any other client that can run SQL DDL commands) and
execute the CREATE PROCEDURE statement using the code prepared by the
StoredProcedure function.

Using R. While you are still in your R environment, you can use the
registerStoredProcedure function in sqlrutils to register the stored procedure with

the database.

For example, you could register the stored procedure sp_rsample in the instance
and database defined in sqlConnStr, by making this R call:

registerStoredProcedure(sp_rsample, sqlConnStr)

) Important

Regardless of whether you use R or SQL, you must run the statement using an
account that has permissions to create new database objects.

Run using SQL


After the stored procedure has been created, open a connection to the SQL database
using any client that supports T-SQL, and pass values for any parameters required by
the stored procedure.

Run using R
Some additional preparation is needed if you want to execute the stored procedure
from R code, rather from SQL Server. For example, if the stored procedure requires input
values, you must set those input parameters before the function can be executed, and
then pass those objects to the stored procedure in your R code.

The overall process of calling the prepared SQL stored procedure is as follows:

1. Call getInputParameters to get a list of input parameter objects.


2. Define a $query or set a $value for each input parameter.
3. Use executeStoredProcedure to execute the stored procedure from the R
development environment, passing the list of input parameter objects that you set.

Example
This example shows the before and after versions of an R script that gets data from a
SQL Server database, performs some transformations on the data, and saves it to a
different database.

This simple example is used only to demonstrate how you might rearrange your R code
to make it easier to convert to a stored procedure.

Before code preparation


R

sqlConnFrom <- "Driver={ODBC Driver 13 for SQL


Server};Server=MyServer01;Database=AirlineSrc;Trusted_Connection=Yes;"

sqlConnTo <- "Driver={ODBC Driver 13 for SQL


Server};Server=MyServer01;Database=AirlineTest;Trusted_Connection=Yes;"

sqlQueryAirline <- "SELECT TOP 10000 ArrDelay, CRSDepTime, DayOfWeek FROM


[AirlineDemoSmall]"

dsSqlFrom <- RxSqlServerData(sqlQuery = sqlQueryAirline, connectionString =


sqlConnFrom)

dsSqlTo <- RxSqlServerData(table = "cleanData", connectionString =


sqlConnTo)

xFunc <- function(data) {

data$CRSDepHour <- as.integer(trunc(data$CRSDepTime))

return(data)

xVars <- c("CRSDepTime")

sqlCompute <- RxInSqlServer(numTasks = 4, connectionString = sqlConnTo)

rxOpen(dsSqlFrom)

rxOpen(dsSqlTo)

if (rxSqlServerTableExists("cleanData", connectionString = sqlConnTo)) {

rxSqlServerDropTable("cleanData")}

rxDataStep(inData = dsSqlFrom,

outFile = dsSqlTo,

transformFunc = xFunc,

transformVars = xVars,

overwrite = TRUE)

7 Note

When you use an ODBC connection rather than invoking the RxSqlServerData
function, you must open the connection using rxOpen before you can perform
operations on the database.

After code preparation


In the updated version, the first line defines the function name. All other code from the
original R solution becomes a part of that function.

myetl1function <- function() {

sqlConnFrom <- "Driver={ODBC Driver 13 for SQL


Server};Server=MyServer01;Database=Airline01;Trusted_Connection=Yes;"

sqlConnTo <- "Driver={ODBC Driver 13 for SQL


Server};Server=MyServer02;Database=Airline02;Trusted_Connection=Yes;"

sqlQueryAirline <- "SELECT TOP 10000 ArrDelay, CRSDepTime, DayOfWeek FROM


[AirlineDemoSmall]"

dsSqlFrom <- RxSqlServerData(sqlQuery = sqlQueryAirline, connectionString


= sqlConnFrom)

dsSqlTo <- RxSqlServerData(table = "cleanData", connectionString =


sqlConnTo)

xFunc <- function(data) {

data$CRSDepHour <- as.integer(trunc(data$CRSDepTime))

return(data)}

xVars <- c("CRSDepTime")

sqlCompute <- RxInSqlServer(numTasks = 4, connectionString = sqlConnTo)

if (rxSqlServerTableExists("cleanData", connectionString = sqlConnTo))


{rxSqlServerDropTable("cleanData")}

rxDataStep(inData = dsSqlFrom,

outFile = dsSqlTo,

transformFunc = xFunc,

transformVars = xVars,

overwrite = TRUE)

return(NULL)

7 Note

Although you do not need to open the ODBC connection explicitly as part of your
code, an ODBC connection is still required to use sqlrutils.

See also
sqlrutils reference
Native scoring using the PREDICT T-SQL
function with SQL machine learning
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Database
Azure SQL
Managed Instance
Azure Synapse Analytics

Learn how to use native scoring with the PREDICT T-SQL function to generate prediction
values for new data inputs in near-real-time. Native scoring requires that you have an
already-trained model.

The PREDICT function uses the native C++ extension capabilities in SQL machine
learning. This methodology offers the fastest possible processing speed of forecasting
and prediction workloads and support models in Open Neural Network Exchange
(ONNX) format or models trained using the RevoScaleR and revoscalepy packages.

How native scoring works


Native scoring uses libraries that can read models in ONNX or a predefined binary
format, and generate scores for new data inputs that you provide. Because the model is
trained, deployed, and stored, it can be used for scoring without having to call the R or
Python interpreter. This means that the overhead of multiple process interactions is
reduced, resulting in faster prediction performance.

To use native scoring, call the PREDICT T-SQL function and pass the following required
inputs:

A compatible model based on a supported model and algorithm.


Input data, typically defined as a T-SQL query.

The function returns predictions for the input data, together with any columns of source
data that you want to pass through.

Prerequisites
PREDICT is available on:

All editions of SQL Server 2017 and later on Windows and Linux
Azure SQL Managed Instance
Azure SQL Database
Azure SQL Edge
Azure Synapse Analytics

The function is enabled by default. You do not need to install R or Python, or enable
additional features.

Supported models
The model formats supported by the PREDICT function depends on the SQL platform on
which you perform native scoring. See the table below to see which model formats are
supported on which platform.

Platform ONNX model format RevoScale model format

SQL Server No Yes

Azure SQL Managed Instance Yes Yes

Azure SQL Database No Yes

Azure SQL Edge Yes No

Azure Synapse Analytics Yes No

ONNX models
The model must be in an Open Neural Network Exchange (ONNX) model format.

RevoScale models
The model must be trained in advance using one of the supported rx algorithms listed
below using the RevoScaleR or revoscalepy package.

Serialize the model using rxSerialize for R, and rx_serialize_model for Python. These
serialization functions have been optimized to support fast scoring.

Supported RevoScale algorithms

The following algorithms are supported in revoscalepy and RevoScaleR.

revoscalepy algorithms
rx_lin_mod
rx_logit
rx_btrees
rx_dtree
rx_dforest

RevoScaleR algorithms
rxLinMod
rxLogit
rxBTrees
rxDtree
rxDForest

If you need to use an algorithms from MicrosoftML or microsoftml, use real-time


scoring with sp_rxPredict.

Unsupported model types include the following types:

Models containing other transformations


Models using the rxGlm or rxNaiveBayes algorithms in RevoScaleR or revoscalepy
equivalents
PMML models
Models created using other open-source or third-party libraries

Examples

PREDICT with an ONNX model


This example shows how to use an ONNX model stored in the dbo.models table for
native scoring.

SQL

DECLARE @model VARBINARY(max) = (

SELECT DATA

FROM dbo.models

WHERE id = 1

);

WITH predict_input

AS (

SELECT TOP (1000) [id]

, CRIM

, ZN

, INDUS

, CHAS

, NOX

, RM

, AGE

, DIS

, RAD

, TAX

, PTRATIO

, B

, LSTAT

FROM [dbo].[features]

SELECT predict_input.id

, p.variable1 AS MEDV

FROM PREDICT(MODEL = @model, DATA = predict_input, RUNTIME=ONNX) WITH


(variable1 FLOAT) AS p;

7 Note

Because the columns and values returned by PREDICT can vary by model type, you
must define the schema of the returned data by using a WITH clause.

PREDICT with RevoScale model


In this example, you create a model using RevoScaleR in R, and then call the real-time
prediction function from T-SQL.

Step 1. Prepare and save the model


Run the following code to create the sample database and required tables.

SQL

CREATE DATABASE NativeScoringTest;

GO

USE NativeScoringTest;

GO

DROP TABLE IF EXISTS iris_rx_data;

GO

CREATE TABLE iris_rx_data (

"Sepal.Length" float not null, "Sepal.Width" float not null

, "Petal.Length" float not null, "Petal.Width" float not null

, "Species" varchar(100) null

);

GO

Use the following statement to populate the data table with data from the iris dataset.
SQL

INSERT INTO iris_rx_data ("Sepal.Length", "Sepal.Width", "Petal.Length",


"Petal.Width" , "Species")

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N'iris_data <- iris;'

, @input_data_1 = N''

, @output_data_1_name = N'iris_data';

GO

Now, create a table for storing models.

SQL

DROP TABLE IF EXISTS ml_models;

GO

CREATE TABLE ml_models ( model_name nvarchar(100) not null primary key

, model_version nvarchar(100) not null

, native_model_object varbinary(max) not null);

GO

The following code creates a model based on the iris dataset and saves it to the table
named models.

SQL

DECLARE @model varbinary(max);

EXECUTE sp_execute_external_script

@language = N'R'

, @script = N'

iris.sub <- c(sample(1:50, 25), sample(51:100, 25), sample(101:150, 25))

iris.dtree <- rxDTree(Species ~ Sepal.Length + Sepal.Width +


Petal.Length + Petal.Width, data = iris[iris.sub, ])

model <- rxSerializeModel(iris.dtree, realtimeScoringOnly = TRUE)

'

, @params = N'@model varbinary(max) OUTPUT'

, @model = @model OUTPUT

INSERT [dbo].[ml_models]([model_name], [model_version],


[native_model_object])

VALUES('iris.dtree','v1', @model) ;

7 Note

Be sure to use the rxSerializeModel function from RevoScaleR to save the model.
The standard R serialize function cannot generate the required format.
You can run a statement such as the following to view the stored model in binary
format:

SQL

SELECT *, datalength(native_model_object)/1024. as model_size_kb

FROM ml_models;

Step 2. Run PREDICT on the model

The following simple PREDICT statement gets a classification from the decision tree
model using the native scoring function. It predicts the iris species based on attributes
you provide, petal length and width.

SQL

DECLARE @model varbinary(max) = (

SELECT native_model_object

FROM ml_models

WHERE model_name = 'iris.dtree'

AND model_version = 'v1');

SELECT d.*, p.*

FROM PREDICT(MODEL = @model, DATA = dbo.iris_rx_data as d)

WITH(setosa_Pred float, versicolor_Pred float, virginica_Pred float) as p;

go

If you get the error, "Error occurred during execution of the function PREDICT. Model is
corrupt or invalid", it usually means that your query didn't return a model. Check
whether you typed the model name correctly, or if the models table is empty.

7 Note

Because the columns and values returned by PREDICT can vary by model type, you
must define the schema of the returned data by using a WITH clause.

Next steps
PREDICT T-SQL function
SQL machine learning documentation
Machine learning and AI with ONNX in SQL Edge
Deploy and make predictions with an ONNX model in Azure SQL Edge
Score machine learning models with PREDICT in Azure Synapse Analytics
Get Python package information
Article • 02/28/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

This article describes how to get information about installed Python packages, including
versions and installation locations, on Azure SQL Managed Instance Machine Learning
Services. Example Python scripts show you how to list package information such as
installation path and version.

Default Python library location


When you install machine learning with SQL Server, a single package library is created at
the instance level for each language that you install. The instance library is a secured
folder registered with SQL Server.

All script or code that runs in-database on SQL Server must load functions from the
instance library. SQL Server can't access packages installed to other libraries. This applies
to remote clients as well: any Python code running in the server compute context can
only use packages installed in the instance library.
To protect server assets, the default
instance library can be modified only by a computer administrator.

Enable external scripts by running the following SQL commands:

SQL

sp_configure 'external scripts enabled', 1;

RECONFIGURE WITH override;

) Important

On Azure SQL Managed Instance, running the sp_configure and RECONFIGURE


commands triggers a SQL server restart for the RG settings to take effect. This can
cause a few seconds of unavailability.

Run the following SQL statement if you want to verify the default library for the current
instance. This example returns the list of folders included in the Python sys.path
variable. The list includes the current directory and the standard library path.

SQL
EXECUTE sp_execute_external_script

@language =N'Python',
@script=N'import sys; print("\n".join(sys.path))'

For more information about the variable sys.path and how it's used to set the
interpreter's search path for modules, see The Module Search Path .

7 Note

Don't try to install Python packages directly in the SQL package library using pip or
similar methods. Instead, use sqlmlutils to install packages in a SQL instance. For
more information, see Install Python packages with sqlmlutils.

Default Microsoft Python packages


The following Microsoft Python packages are installed with SQL Server Machine
Learning Services when you select the Python feature during setup.

Packages Version Description

revoscalepy 9.4.7 Used for remote compute contexts, streaming, parallel execution of rx
functions for data import and transformation, modeling, visualization,
and analysis.

microsoftml 9.4.7 Adds machine learning algorithms in Python.

For information on which version of Python is included, see Python and R versions.

Component upgrades
By default, Python packages are refreshed through service packs and cumulative
updates. Additional packages and full version upgrades of core Python components are
possible only through product upgrades.

Default open-source Python packages


When you select the Python language option during setup, Anaconda 4.2 distribution
(over Python 3.5) is installed. In addition to Python code libraries, the standard
installation includes sample data, unit tests, and sample scripts.
) Important

You should never manually overwrite the version of Python installed by SQL Server
Setup with newer versions on the web. Microsoft Python packages are based on
specific versions of Anaconda. Modifying your installation could destabilize it.

List all installed Python packages


The following example script displays a list of all Python packages installed in the SQL
Server instance.

SQL

EXECUTE sp_execute_external_script

@language = N'Python',

@script = N'

import pkg_resources

import pandas

OutputDataSet = pandas.DataFrame(sorted([(i.key, i.version) for i in


pkg_resources.working_set]))'

WITH result sets((Package NVARCHAR(128), Version NVARCHAR(128)));

Find a single Python package


If you've installed a Python package and want to make sure that it's available to a
particular SQL Server instance, you can execute a stored procedure to look for the
package and return messages.

For example, the following code looks for the scikit-learn package.
If the package is
found, the code prints the package version.

SQL

EXECUTE sp_execute_external_script

@language = N'Python',

@script = N'

import pkg_resources

pkg_name = "scikit-learn"

try:

version = pkg_resources.get_distribution(pkg_name).version

print("Package " + pkg_name + " is version " + version)

except:

print("Package " + pkg_name + " not found")

'

Result:

text

STDOUT message(s) from external script: Package scikit-learn is version


0.20.2

View the version of Python


The following example code returns the version of Python installed in the instance of
SQL Server.

SQL

EXECUTE sp_execute_external_script

@language = N'Python',

@script = N'

import sys

print(sys.version)

'

Next steps
Install new Python packages with sqlmlutils
Install Python packages with sqlmlutils
Article • 02/28/2023

Applies to:
SQL Server 2019 (15.x)
Azure SQL Managed Instance

This article describes how to use functions in the sqlmlutils package to install new
Python packages to an instance of Azure SQL Managed Instance Machine Learning
Services. The packages you install can be used in Python scripts running in-database
using the sp_execute_external_script T-SQL statement.

7 Note

You cannot update or uninstall packages that have been preinstalled on an instance
of SQL Managed Instance Machine Learning Services. To view a list of packages
currently installed, see List all installed Python packages.

For more information about package location and installation paths, see Get Python
package information.

Prerequisites
Install Azure Data Studio on the client computer you use to connect to SQL Server.
You can use other database management or query tools, but this article assumes
Azure Data Studio.

Install the Python kernel in Azure Data Studio. You can also install and use Python
from the command line, and you can use an alternative Python development
environment such as Visual Studio Code with the Python Extension .

The version of Python on the client computer must match the version of Python on
the server, and packages you install must be compliant with the version of Python
you have.
For information on which version of Python is included with each SQL
Server version, see Python and R versions.

To verify the version of Python on a particular SQL Server instance, use the
following T-SQL command.

SQL

EXECUTE sp_execute_external_script

@language = N'Python',

@script = N'

import sys

print(sys.version)

'

Other considerations
The Python package library is located in the Program Files folder of your SQL
Server instance and, by default, installing in this folder requires administrator
permissions. For more information, see Package library location.

Package installation is specific to the SQL instance, database, and user you specify
in the connection information you provide to sqlmlutils. To use the package in
multiple SQL instances or databases, or for different users, you'll need to install the
package for each one. The exception is that if the package is installed by a member
of dbo , the package is public and is shared with all users. If a user installs a newer
version of a public package, the public package is not affected but that user will
have access to the newer version.

Before adding a package, consider whether the package is a good fit for the SQL
Server environment.

We recommend that you use Python in-database for tasks that benefit from
tight integration with the database engine, such as machine learning, rather
than tasks that simply query the database.

If you add packages that put too much computational pressure on the server,
performance will suffer.

On a hardened SQL Server environment, you might want to avoid the following:
Packages that require network access
Packages that require elevated file system access
Packages used for web development or other tasks that don't benefit by
running inside SQL Server

The Python package tensorflow cannot be installed using sqlmlutils. For more
information and a workaround, see Known issues in SQL Server Machine
Learning Services.

Install sqlmlutils on the client computer


To use sqlmlutils, you first need to install it on the client computer that you use to
connect to SQL Server.
In Azure Data Studio
If you'll be using sqlmlutils in Azure Data Studio, you can install it using the Manage
Packages feature in a Python kernel notebook.

1. In a Python kernel notebook in Azure Data Studio, click Manage Packages.


2. Click Add new.
3. Enter "sqlmlutils" in the Search Pip packages field and click Search.
4. Select the Package Version you want to install (the latest version is recommended).
5. Click Install and then Close.

From Python command line


If you'll be using sqlmlutils from a Python command prompt or IDE, you can install
sqlmlutils with a simple pip command:

Console

pip install sqlmlutils

You can also install sqlmlutils from a zip file:

1. Make sure you have pip installed. See pip installation for more information.
2. Download the latest sqlmlutils zip file from
https://github.com/microsoft/sqlmlutils/tree/master/R/dist to the client
computer. Don't unzip the file.
3. Open a Command Prompt and run the following commands to install the
sqlmlutils package. Substitute the full path to the sqlmlutils zip file you
downloaded - this example assumes the downloaded file is c:\temp\sqlmlutils-
1.0.0.zip .

Console

pip install --upgrade --upgrade-strategy only-if-needed


c:\temp\sqlmlutils-1.0.0.zip

Add a Python package on SQL Server


Using sqlmlutils, you can add Python packages to a SQL instance. You can then use
those packages in your Python code running in the SQL instance. sqlmlutils uses
CREATE EXTERNAL LIBRARY to install the package and each of its dependencies.
In the following example, you'll add the text-tools package to SQL Server.

Add the package online


If the client computer you use to connect to SQL Server has Internet access, you can use
sqlmlutils to find the text-tools package and any dependencies over the Internet, and
then install the package to a SQL Server instance remotely.

1. On the client computer, open Python or a Python environment.

2. Use the following commands to install the text-tools package. Substitute your own
SQL Server database connection information.

Python

import sqlmlutils

connection = sqlmlutils.ConnectionInfo(server="server", database="database",


uid="username", pwd="password")

sqlmlutils.SQLPackageManager(connection).install("text-tools")

Add the package offline


If the client computer you use to connect to SQL Server doesn't have an Internet
connection, you can use pip on a computer with Internet access to download the
package and any dependent packages to a local folder. You then copy the folder to the
client computer where you can install the package offline.

On a computer with Internet access

1. Open a Command Prompt and run the following command to create a local folder
that contains the text-tools package. This example creates the folder
c:\temp\text-tools .

Console

pip download text-tools -d c:\temp\text-tools

2. Copy the text-tools folder to the client computer. The following example
assumes you copied it to c:\temp\packages\text-tools .

On the client computer


Use sqlmlutils to install each package (WHL file) you find in the local folder that pip
created. It doesn't matter in what order you install the packages.

In this example, text-tools has no dependencies, so there is only one file from the text-
tools folder for you to install. In contrast, a package such as scikit-plot has 11
dependencies, so you would find 12 files in the folder (the scikit-plot package and the
11 dependent packages), and you would install each of them.

Run the following Python script. Substitute the actual file path and name of the package,
and your own SQL Server database connection information. Repeat the
sqlmlutils.SQLPackageManager statement for each package file in the folder.

Python

import sqlmlutils

connection = sqlmlutils.ConnectionInfo(server="yourserver",
database="yourdatabase", uid="username", pwd="password"))

sqlmlutils.SQLPackageManager(connection).install("text_tools-1.0.0-py3-none-
any.whl")

Use the package


You can now use the package in a Python script in SQL Server. For example:

SQL

EXECUTE sp_execute_external_script

@language = N'Python',

@script = N'

from text_tools.finders import find_best_string

corpus = "Lorem Ipsum text"

query = "Ipsum"

first_match = find_best_string(query, corpus)

print(first_match)

'

Remove the package from SQL Server


If you would like to remove the text-tools package, use the following Python command
on the client computer, using the same connection variable you defined earlier.

Python

sqlmlutils.SQLPackageManager(connection).uninstall("text-tools")

More sqlmlutils functions


The sqlmlutils package contains a number of functions for managing Python packages,
and for creating, managing, and running stored procedures and queries in a SQL Server.
For details, see the sqlmlutils Python README file .

For information about any sqlmlutils function, use the Python help function. For
example:

Python

import sqlmlutils

help(SQLPackageManager.install)

Next steps
For information about Python packages installed in SQL Server Machine Learning
Services, see Get Python package information.

For information about installing R packages in SQL Server Machine Learning


Services, see Install new R packages on SQL Server.
Get R package information
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

This article describes how to get information about installed R packages on Azure SQL
Managed Instance Machine Learning Services. Example R scripts show you how to list
package information such as installation path and version.

7 Note

Feature capabilities and installation options vary between versions of SQL Server.
Use the version selector dropdown to choose the appropriate version of SQL
Server.

Default R library location


When you install machine learning with SQL Server, a single package library is created at
the instance level for each language that you install. On Windows, the instance library is
a secured folder registered with SQL Server.

All script that runs in-database on SQL Server must load functions from the instance
library. SQL Server can't access packages installed to other libraries. This applies to
remote clients as well: any R script running in the server compute context can only use
packages installed in the instance library.
To protect server assets, the default instance
library can be modified only by a computer administrator.

Run the following statement to verify the default R package library for the current
instance:

SQL

EXECUTE sp_execute_external_script

@language = N'R',

@script = N'OutputDataSet <- data.frame(.libPaths());'

WITH RESULT SETS (([DefaultLibraryName] VARCHAR(MAX) NOT NULL));

GO

Default Microsoft R packages


The following Microsoft R packages are installed with SQL Server Machine Learning
Services when you select the R feature during setup.

Packages Version Description

RevoScaleR 9.4.7 Used for remote compute contexts, streaming, parallel execution of rx
functions for data import and transformation, modeling, visualization,
and analysis.

sqlrutils 1.0.0 Used for including R script in stored procedures.

MicrosoftML 9.4.7 Adds machine learning algorithms in R.

olapR 1.0.0 Used for writing MDX statements in R.

Component upgrades
By default, R packages are refreshed through service packs and cumulative updates.
Additional packages and full version upgrades of core R components are possible only
through product upgrades.

Default open-source R packages


R support includes open-source R so that you can call base R functions and install
additional open-source and third-party packages. R language support includes core
functionality such as base, stats, utils, and others. A base installation of R also includes
numerous sample datasets and standard R tools such as RGui (a lightweight interactive
editor) and RTerm (an R command prompt).

The distribution of open-source R included in your installation is Microsoft R Open


(MRO) . MRO adds value to base R by including additional open-source packages such
as the Intel Math Kernel Library .

For information on which version of R is included with each SQL Server version, see
Python and R versions.

) Important

You should never manually overwrite the version of R installed by SQL Server Setup
with newer versions on the web. Microsoft R packages are based on specific
versions of R. Modifying your installation could destabilize it.
List all installed R packages
The following example uses the R function installed.packages() in a Transact-SQL
stored procedure to display a list of R packages that have been installed in the
R_SERVICES library for the current SQL instance. This script returns package name and
version fields in the DESCRIPTION file.

SQL

EXECUTE sp_execute_external_script

@language=N'R',

@script = N'str(OutputDataSet);

packagematrix <- installed.packages();

Name <- packagematrix[,1];

Version <- packagematrix[,3];

OutputDataSet <- data.frame(Name, Version);',

@input_data_1 = N'

'

WITH RESULT SETS ((PackageName nvarchar(250), PackageVersion nvarchar(max)


))

For more information about the optional and default fields for the R package
DESCRIPTION field, see
https://cran.r-project.org .

Find a single R package


If you've installed an R package and want to make sure that it's available to a particular
SQL Server instance, you can execute a stored procedure to load the package and return
messages.

For example, the following statement looks for and loads the glue package, if
available.
If the package cannot be located or loaded, you get an error.

SQL

EXECUTE sp_execute_external_script

@language =N'R',

@script=N'

require("glue")

'

To see more information about the package, view the packageDescription .


The following
statement returns information for the MicrosoftML package.

SQL
EXECUTE sp_execute_external_script

@language = N'R',

@script = N'

print(packageDescription("MicrosoftML"))

'

Next steps
Install new R packages with sqlmlutils
Install R packages with sqlmlutils
Article • 03/03/2023

Applies to:
SQL Server 2019 (15.x)
Azure SQL Managed Instance

This article describes how to use functions in the sqlmlutils package to install R
packages to an instance of Azure SQL Managed Instance Machine Learning Services. The
packages you install can be used in R scripts running in-database using the
sp_execute_external_script T-SQL statement.

7 Note

You cannot update or uninstall packages that have been preinstalled on an instance
of SQL Managed Instance Machine Learning Services. To view a list of packages
currently installed, see List all installed R packages.

Prerequisites
Install R and RStudio Desktop on the client computer you use to connect to
SQL Server. You can use any R IDE for running scripts, but this article assumes
RStudio.

The version of R on the client computer must match the version of R on the server,
and packages you install must be compliant with the version of R you have.
For
information on which version of R is included with each SQL Server version, see
Python and R versions.

To verify the version of R on a particular SQL Server, use the following T-SQL
command.

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'print(R.version)'

Install Azure Data Studio on the client computer you use to connect to SQL Server.
You can use other database management or query tools, but this article assumes
Azure Data Studio.

Other considerations
Package installation is specific to the SQL instance, database, and user you specify
in the connection information you provide to sqlmlutils. To use the package in
multiple SQL instances or databases, or for different users, you'll need to install the
package for each one. The exception is that if the package is installed by a member
of dbo , the package is public and is shared with all users. If a user installs a newer
version of a public package, the public package is not affected but that user will
have access to the newer version.

R script running in SQL Server can use only packages installed in the default
instance library. SQL Server cannot load packages from external libraries, even if
that library is on the same computer. This includes R libraries installed with other
Microsoft products.

On a hardened SQL Server environment, you might want to avoid the following:
Packages that require network access
Packages that require elevated file system access
Packages used for web development or other tasks that don't benefit by
running inside SQL Server

Install sqlmlutils on the client computer


To use sqlmlutils, you first need to install it on the client computer you use to connect
to SQL Server.

The sqlmlutils package depends on the odbc package, and odbc depends on a number
of other packages. The following procedures install all of these packages in the correct
order.

Install sqlmlutils online


If the client computer has Internet access, you can download and install sqlmlutils and
its dependent packages online.

1. Download the latest sqlmlutils file ( .zip for Windows, .tar.gz for Linux) from
https://github.com/microsoft/sqlmlutils/releases to the client computer. Don't
expand the file.

2. Open a Command Prompt and run the following commands to install the
packages odbc and sqlmlutils. Substitute the path to the sqlmlutils file you
downloaded. The odbc package is found online and installed.

Console
R.exe -e "install.packages('odbc', type='binary')"

R.exe CMD INSTALL sqlmlutils_1.0.0.zip

Install sqlmlutils offline


If the client computer doesn't have an Internet connection, you need to download the
odbc and sqlmlutils packages in advance using a computer that does have Internet
access. You then can copy the files to a folder on the client computer and install the
packages offline.

The odbc package has a number of dependent packages, and identifying all
dependencies for a package gets complicated. We recommend that you use
miniCRAN to create a local repository folder for the package that includes all the
dependent packages.
For more information, see Create a local R package repository
using miniCRAN.

The sqlmlutils package consists of a single file that you can copy to the client computer
and install.

On a computer with Internet access:

1. Install miniCRAN. See Install miniCRAN for details.

2. In RStudio, run the following R script to create a local repository of the package
odbc. This example assumes the repository will be created in the folder odbc .

library("miniCRAN")

CRAN_mirror <- c(CRAN = "https://cran.microsoft.com")

local_repo <- "odbc"

pkgs_needed <- "odbc"

pkgs_expanded <- pkgDep(pkgs_needed, repos = CRAN_mirror);

makeRepo(pkgs_expanded, path = local_repo, repos = CRAN_mirror, type =


"win.binary", Rversion = "3.5");

For the Rversion value, use the version of R installed on SQL Server. To verify the
installed version, use the following T-SQL command.

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'print(R.version)'

3. Download the latest sqlmlutils file ( .zip for Windows, .tar.gz for Linux) from
https://github.com/microsoft/sqlmlutils/releases . Don't expand the file.

4. Copy the entire odbc repository folder and the sqlmlutils file to the client
computer.

On the client computer you use to connect to SQL Server:

1. Open a command prompt.

2. Run the following commands to install odbc and then sqlmlutils. Substitute the full
paths to the odbc repository folder and the sqlmlutils file you copied to this
computer.

Console

R.exe -e "install.packages('odbc', repos='odbc')"

R.exe CMD INSTALL sqlmlutils_1.0.0.zip

Add an R package on SQL Server


In the following example, you'll add the glue package to SQL Server.

Add the package online


If the client computer you use to connect to SQL Server has Internet access, you can use
sqlmlutils to find the glue package and any dependencies over the Internet, and then
install the package to a SQL Server instance remotely.

1. On the client computer, open RStudio and create a new R Script file.

2. Use the following R script to install the glue package using sqlmlutils. Substitute
your own SQL Server database connection information.

library(sqlmlutils)

connection <- connectionInfo(

server = "server",

database = "database",

uid = "username",

pwd = "password")

sql_install.packages(connectionString = connection, pkgs = "glue",


verbose = TRUE, scope = "PUBLIC")

 Tip

The scope can be either PUBLIC or PRIVATE. Public scope is useful for the
database administrator to install packages that all users can use. Private scope
makes the package available only to the user who installs it. If you don't
specify the scope, the default scope is PRIVATE.

Add the package offline


If the client computer doesn't have an Internet connection, you can use miniCRAN to
download the glue package using a computer that does have Internet access. You then
copy the package to the client computer where you can install the package offline.
See
Install miniCRAN for information on installing miniCRAN.

On a computer with Internet access:

1. Run the following R script to create a local repository for glue. This example
creates the repository folder in c:\downloads\glue .

library("miniCRAN")

CRAN_mirror <- c(CRAN = "https://cran.microsoft.com")

local_repo <- "c:/downloads/glue"

pkgs_needed <- "glue"

pkgs_expanded <- pkgDep(pkgs_needed, repos = CRAN_mirror);

makeRepo(pkgs_expanded, path = local_repo, repos = CRAN_mirror, type =


"win.binary", Rversion = "3.5");

For the Rversion value, use the version of R installed on SQL Server. To verify the
installed version, use the following T-SQL command.

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'print(R.version)'

2. Copy the entire glue repository folder ( c:\downloads\glue ) to the client computer.
For example, copy it to the folder c:\temp\packages\glue .

On the client computer:


1. Open RStudio and create a new R Script file.

2. Use the following R script to install the glue package using sqlmlutils. Substitute
your own SQL Server database connection information (if you don't use Windows
Authentication, add uid and pwd parameters).

library(sqlmlutils)

connection <- connectionInfo(

server= "yourserver",

database = "yourdatabase")

localRepo = "c:/temp/packages/glue"

sql_install.packages(connectionString = connection, pkgs = "glue",


verbose = TRUE, scope = "PUBLIC", repos=paste0("file:///",localRepo))

 Tip

The scope can be either PUBLIC or PRIVATE. Public scope is useful for the
database administrator to install packages that all users can use. Private scope
makes the package available only to the user who installs it. If you don't
specify the scope, the default scope is PRIVATE.

Use the package


Once the glue package is installed, you can use it in an R script in SQL Server with the T-
SQL sp_execute_external_script command.

1. Open Azure Data Studio and connect to your SQL Server database.

2. Run the following command:

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'

library(glue)

name <- "Fred"

birthday <- as.Date("2020-06-14")


text <- glue(''My name is {name} '',

''and my birthday is {format(birthday, "%A, %B %d, %Y")}.'')

print(text)

';

Results

text

My name is Fred and my birthday is Sunday, June 14, 2020.

Remove the package


If you would like to remove the glue package, run the following R script. Use the same
connection variable you defined earlier.

sql_remove.packages(connectionString = connection, pkgs = "glue", scope =


"PUBLIC")

More sqlmlutils functions


The sqlmlutils package contains a number of functions for managing R packages, and
for creating, managing, and running stored procedures and queries in a SQL Server. For
details, see the sqlmlutils R README file .

For information about any sqlmlutils function, use the R help function or ? operator. For
example:

library(sqlmlutils)

help("sql_install.packages")

Next steps
For information about installed R packages, see Get R package information
For help in working with R packages, see Tips for using R packages
For information about installing Python packages, see Install Python packages with
pip
For more information about SQL Server Machine Learning Services, see What is
SQL Server Machine Learning Services (Python and R)?
Create a local R package repository
using miniCRAN
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

This article describes how to install R packages offline by using miniCRAN to create a
local repository of packages and dependencies. miniCRAN identifies and downloads
packages and dependencies into a single folder that you copy to other computers for
offline R package installation.

You can specify one or more packages, and miniCRAN recursively reads the dependency
tree for these packages. It then downloads only the listed packages and their
dependencies from CRAN or similar repositories.

When it's done, miniCRAN creates an internally consistent repository consisting of the
selected packages and all required dependencies. You can move this local repository to
the server, and proceed to install the packages without an internet connection.

Experienced R users often look for the list of dependent packages in the DESCRIPTION
file of a downloaded package. However, packages listed in Imports might have second-
level dependencies. For this reason, we recommend miniCRAN for assembling the full
collection of required packages.

Why create a local repository


The goal of creating a local package repository is to provide a single location that a
server administrator or other users in the organization can use to install new R packages
on a server, especially one that does not have internet access. After creating the
repository, you can modify it by adding new packages or upgrading the version of
existing packages.

Package repositories are useful in these scenarios:

Security: Many R users are accustomed to downloading and installing new R


packages at will, from CRAN or one of its mirror sites. However, for security
reasons, production servers running SQL Server typically do not have internet
connectivity.

Easier offline installation: To install a package to an offline server requires that you
also download all package dependencies. Using miniCRAN makes it easier to get
all dependencies in the correct format and avoid dependency errors.

Improved version management: In a multi-user environment, there are good


reasons to avoid unrestricted installation of multiple package versions on the
server. Use a local repository to provide a consistent set of packages for your users.

Install miniCRAN
The miniCRAN package itself is dependent on 18 other CRAN packages, among which is
the RCurl package, which has a system dependency on the curl-devel package. Similarly,
package XML has a dependency on libxml2-devel. To resolve dependencies, we
recommend that you build your local repository initially on a machine with full internet
access.

Run the following commands on a computer with a base R, R tools, and internet
connection. It's assumed that this is not your SQL Server computer. The following
commands install the miniCRAN package and the igraph package. This example checks
whether the package is already installed, but you can bypass the if statements and
install the packages directly.

if(!require("miniCRAN")) install.packages("miniCRAN")

if(!require("igraph")) install.packages("igraph")

library("miniCRAN")

Set the CRAN mirror and MRAN snapshot


Specify a mirror site to use in getting packages. For example, you could use the MRAN
site, or any other site in your region that contains the packages you need. If a download
fails, try another mirror site.

CRAN_mirror <- c(CRAN = "https://cran.cnr.berkeley.edu")

Create a local folder


Create a local folder in which to store the collected packages. If you repeat this often,
you might want to use a descriptive name, such as "miniCRANZooPackages" or
"miniCRANMyRPackageV2".
Specify the folder as the local repo. R syntax uses a forward slash for path names, which
is opposite from Windows conventions.

local_repo <- "C:/miniCRANZooPackages"

Add packages to the local repo


After miniCRAN is installed and loaded, create a list that specifies the additional
packages you want to download.

Do not add dependencies to this initial list. The igraph package used by miniCRAN
generates the list of dependencies automatically. For more information about how to
use the generated dependency graph, see Using miniCRAN to identify package
dependencies .

1. Add target packages "zoo" and "forecast" to a variable.

pkgs_needed <- c("zoo", "forecast")

2. Optionally, plot the dependency graph. This is not necessary, but it can be
informative.

plot(makeDepGraph(pkgs_needed))

3. Create the local repo. Be sure to change the R version, if necessary, to the version
installed on your SQL Server instance. If you did a component upgrade, your
version might be newer than the original version. For more information, see Get R
package information.

pkgs_expanded <- pkgDep(pkgs_needed, repos = CRAN_mirror);

makeRepo(pkgs_expanded, path = local_repo, repos = CRAN_mirror, type =


"win.binary", Rversion = "3.3");

From this information, the miniCRAN package creates the folder structure that you
need to copy the packages to the SQL Server later.
At this point you should have a folder containing the packages you need and any
additional packages that are required. The folder should contain a collection of zipped
packages. Do not unzip the packages or rename any files.

Optionally, run the following code to list the packages contained in the local miniCRAN
repository.

pdb <- as.data.frame(pkgAvail(local_repo, type = "win.binary", Rversion =


"3.3"), stringsAsFactors = FALSE);

head(pdb);

pdb$Package;

pdb[, c("Package", "Version", "License")]

Add packages to the instance library


After you have a local repository with the packages you need, move the package
repository to the SQL Server computer. The following procedure describes how to install
the packages using R tools.

7 Note

The recommended method for installing packages is using sqlmlutils. See Install
new R packages with sqlmlutils.

1. Copy the folder containing the miniCRAN repository, in its entirety, to the server
where you plan to install the packages. The folder typically has this structure:

<miniCRAN root>/bin/windows/contrib/version/<all packages>

In this procedure, we assume a folder off the root drive.

2. Open an R tool associated with the instance (for example, you could use Rgui.exe).
Right-click and select Run as administrator to allow the tool to make updates to
your system.

3. Get the path for the instance library, and add it to the list of library paths.

4. Specify the new location on the server where you copied the miniCRAN repository
as server_repo .
In this example, we assume that you copied the repository to a temporary folder
on the server.

inputlib <- "C:/miniCRANZooPackages"

5. Since you're working in a new R workspace on the server, you must also furnish the
list of packages to install.

mypackages <- c("zoo", "forecast")

6. Install the packages, providing the path to the local copy of the miniCRAN repo.

install.packages(mypackages, repos = file.path("file://",


normalizePath(inputlib, winslash = "/")), lib = outputlib, type =
"win.binary", dependencies = TRUE);

7. From the instance library, you can view the installed packages using a command
like the following:

installed.packages()

See also
Get R package information
R tutorials
Tips for using R packages
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

This article provides helpful tips on using R packages in SQL Server. These tips are for
DBAs who are unfamiliar with R, and experienced R developers who are unfamiliar with
package access in a SQL Server instance.

If you're new to R
As an administrator installing R packages for the first time, knowing a few basics about
R package management can help you get started.

Package dependencies
R packages frequently depend on multiple other packages, some of which might not be
available in the default R library used by the instance. Sometimes a package requires a
different version of a dependent package than what's already installed. Package
dependencies are noted in a DESCRIPTION file embedded in the package, but are
sometimes incomplete. You can use a package called iGraph to fully articulate the
dependency graph.

If you need to install multiple packages, or want to ensure that everyone in your
organization gets the correct package type and version, we recommend that you use
the miniCRAN package to analyze the complete dependency chain. minicRAN creates
a local repository that can be shared among multiple users or computers. For more
information, see Create a local package repository using miniCRAN.

Package sources, versions, and formats


There are multiple sources for R packages, such as CRAN and Bioconductor . The
official site for the R language (https://www.r-project.org/ ) lists many of these
resources. Microsoft offers MRAN for its distribution of open-source R (MRO ) and
other packages. Many packages are published to GitHub, where developers can obtain
the source code.

Know which library you're installing to and which


packages are already installed
If you have previously modified the R environment on the computer, before installing
anything ensure that the R environment variable .libPath uses just one path.

This path should point to the R_SERVICES folder for the instance. For more information,
including how to determine which packages are already installed, see Get R package
information.

If you're new to SQL Server


As an R developer working on code executing on SQL Server, the security policies
protecting the server constrain your ability to control the R environment. The following
tips describe typical situations and provide suggestions for working in this environment.

R user libraries: not supported on SQL Server


R developers who need to install new R packages are accustomed to installing packages
at will, using a private, user library whenever the default library is not available, or when
the developer is not an administrator on the computer. For example, in a typical R
development environment, the user would add the location of the package to the R
environment variable libPath , or reference the full package path, like this:

library("c:/Users/<username>/R/win-library/packagename")

This does not work when running R solutions in SQL Server, because R packages must
be installed to a specific default library that is associated with the instance. When a
package is not available in the default library, you get this error when you try to call the
package:

Error in library(xxx) : there is no package called 'package-name'

For information on how to install R packages in SQL Server, see Install new R packages
on SQL Server Machine Learning Services or SQL Server R Services.

How to avoid "package not found" errors


Using the following guidelines will help you avoid "package not found" errors.

Eliminate dependencies on user libraries.


It's a bad development practice to install required R packages to a custom user
library. This can lead to errors if a solution is run by another user who does not
have access to the library location.

Also, if a package is installed in the default library, the R runtime loads the package
from the default library, even if you specify a different version in the R code.

Make sure your code is able to run in a shared environment.

Avoid installing packages as part of a solution. If you don't have permissions to


install packages, the code will fail. Even if you do have permissions to install
packages, you should do so separately from other code that you want to execute.

Check your code to make sure that there are no calls to uninstalled packages.

Update your code to remove direct references to the paths of R packages or R


libraries.

Know which package library is associated with the instance. For more information,
see Get R package information.

See also
Install new R packages with sqlmlutils
Monitor Python and R script execution
using custom reports in SQL Server
Management Studio
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

Use custom reports in SQL Server Management Studio (SSMS) to monitor the execution
of external scripts (Python and R), resources used, diagnose problems, and tune
performance in SQL Server Machine Learning Services.

In these reports, you can view details such as:

Active Python or R sessions


Configuration settings for the instance
Execution statistics for machine learning jobs
Extended events for R Services
Python or R packages installed on the current instance

This article explains how to install and use the custom reports provided for SQL Server
Machine Learning Services.

For more information on reports in SQL Server Management Studio, see Custom reports
in Management Studio.

How to install the reports


The reports are designed using SQL Server Reporting Services, but can be used directly
from SQL Server Management Studio. Reporting Services does not have to be installed
on your SQL Server instance.

To use these reports, follow these steps:

1. Download the SSMS Custom Reports for SQL Server Machine Learning Services
from GitHub.

7 Note

The custom report ML Services - Configure Instance is not supported on


Azure SQL Managed Instance.
2. Copy the reports to Management Studio

a. Locate the custom reports folder used by SQL Server Management Studio. By
default, custom reports are stored in this folder (where user_name is your
Windows user name):

C:\Users\user_name\Documents\SQL Server Management Studio\Custom Reports

You can also specify a different folder, or create subfolders.

b. Copy the *.RDL files you downloaded to the custom reports folder.

3. Run the reports in Management Studio

a. In Management Studio, right-click the Databases node for the instance where
you want to run the reports.

b. Click Reports, and then click Custom Reports.

c. In the Open File dialog box, locate the custom reports folder.

d. Select one of the RDL files you downloaded, and then click Open.

Reports
The SSMS Custom Reports repository in GitHub includes the following reports:

Report Description

Active Users who are currently connected to the SQL Server instance and running a
Sessions Python or R script.

Configuration Installation settings of Machine Learning Services and properties of the Python or
R runtime.

Configure Configure Machine Learning Services.


Instance

Execution Execution statistics of Machine Learning services. For example, you can get the
Statistics total number of external scripts executions and number of parallel executions.

Extended Extended events that are available to get more insights into external scripts
Events execution.

Packages List the R or Python packages installed on the SQL Server instance and their
properties, such as version and name.
Report Description

Resource View the CPU, Memory, IO consumption of SQL Server, and external scripts
Usage execution. You can also view the memory setting for external resource pools.

Next steps
Monitor SQL Server Machine Learning Services using dynamic management views
(DMVs)
Monitor Python and R scripts with extended events in SQL Server Machine
Learning Services
Monitor SQL Server Machine Learning
Services using dynamic management
views (DMVs)
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

Use dynamic management views (DMVs) to monitor the execution of external scripts
(Python and R), resources used, diagnose problems, and tune performance in SQL Server
Machine Learning Services.

In this article, you will find the DMVs that are specific for SQL Server Machine Learning
Services. You will also find example queries that show:

Settings and configuration options for machine learning


Active sessions running external Python or R scripts
Execution statistics for the external runtime for Python and R
Performance counters for external scripts
Memory usage for the OS, SQL Server, and external resource pools
Memory configuration for SQL Server and external resource pools
Resource Governor resource pools, including external resource pools
Installed packages for Python and R

For more general information about DMVs, see System Dynamic Management Views.

 Tip

You can also use the custom reports to monitor SQL Server Machine Learning
Services. For more information, see Monitor machine learning using custom
reports in Management Studio.

Dynamic management views


The following dynamic management views can be used when monitoring machine
learning workloads in SQL Server. To query the DMVs, you need VIEW SERVER STATE
permission on the instance.

Dynamic management view Type Description


Dynamic management view Type Description

sys.dm_external_script_requests Execution Returns a row for each


active worker account
that is running an
external script.

sys.dm_external_script_execution_stats Execution Returns one row for each


type of external script
request.

sys.dm_os_performance_counters Execution Returns a row per


performance counter
maintained by the
server. If you use the
search condition WHERE
object_name LIKE
'%External Scripts%' ,
you can use this
information to see how
many scripts ran, which
scripts were run using
which authentication
mode, or how many R or
Python calls were issued
on the instance overall.

sys.dm_resource_governor_external_resource_pools Resource Returns information


Governor about the current
external resource pool
state in Resource
Governor, the current
configuration of
resource pools, and
resource pool statistics.

sys.dm_resource_governor_external_resource_pool_affinity Resource Returns CPU affinity


Governor information about the
current external resource
pool configuration in
Resource Governor.
Returns one row per
scheduler in SQL Server
where each scheduler is
mapped to an individual
processor. Use this view
to monitor the condition
of a scheduler or to
identify runaway tasks.
For information about monitoring SQL Server instances, see Catalog Views and Resource
Governor Related Dynamic Management Views.

Settings and configuration


View the Machine Learning Services installation setting and configuration options.

Run the query below to get this output. For more information on the views and
functions used, see sys.dm_server_registry, sys.configurations, and SERVERPROPERTY.

SQL

SELECT CAST(SERVERPROPERTY('IsAdvancedAnalyticsInstalled') AS INT) AS


IsMLServicesInstalled

, CAST(value_in_use AS INT) AS ExternalScriptsEnabled

, COALESCE(SIGN(SUSER_ID(CONCAT (

CAST(SERVERPROPERTY('MachineName') AS NVARCHAR(128))

, '\SQLRUserGroup'

, CAST(serverproperty('InstanceName') AS NVARCHAR(128))

))), 0) AS ImpliedAuthenticationEnabled

, COALESCE((

SELECT CAST(r.value_data AS INT)

FROM sys.dm_server_registry AS r

WHERE r.registry_key LIKE 'HKLM\Software\Microsoft\Microsoft SQL


Server\%\SuperSocketNetLib\Tcp'

AND r.value_name = 'Enabled'

), - 1) AS IsTcpEnabled

FROM sys.configurations

WHERE name = 'external scripts enabled';

The query returns the following columns:

Column Description

IsMLServicesInstalled Returns 1 if SQL Server Machine Learning Services is installed for


the instance. Otherwise, returns 0.

ExternalScriptsEnabled Returns 1 if external scripts is enabled for the instance.


Otherwise, returns 0.

ImpliedAuthenticationEnabled Returns 1 if implied authentication is enabled. Otherwise, returns


0. The configuration for implied authentication is checked by
verifying if a login exists for SQLRUserGroup.
Column Description

IsTcpEnabled Returns 1 if the TCP/IP protocol is enabled for the instance.


Otherwise, returns 0. For more information, see Default SQL
Server Network Protocol Configuration.

Active sessions
View the active sessions running external scripts.

Run the query below to get this output. For more information on the dynamic
management views used, see sys.dm_exec_requests, sys.dm_external_script_requests,
and sys.dm_exec_sessions.

SQL

SELECT r.session_id, r.blocking_session_id, r.status, DB_NAME(s.database_id)


AS database_name

, s.login_name, r.wait_time, r.wait_type, r.last_wait_type,


r.total_elapsed_time, r.cpu_time

, r.reads, r.logical_reads, r.writes, er.language,


er.degree_of_parallelism, er.external_user_name

FROM sys.dm_exec_requests AS r

INNER JOIN sys.dm_external_script_requests AS er

ON r.external_script_request_id = er.external_script_request_id

INNER JOIN sys.dm_exec_sessions AS s

ON s.session_id = r.session_id;

The query returns the following columns:

Column Description

session_id Identifies the session associated with each active primary connection.

blocking_session_id ID of the session that is blocking the request. If this column is NULL, the
request is not blocked, or the session information of the blocking session
is not available (or cannot be identified).

status Status of the request.

database_name Name of the current database for each session.

login_name SQL Server login name under which the session is currently executing.
Column Description

wait_time If the request is currently blocked, this column returns the duration in
milliseconds, of the current wait. Is not nullable.

wait_type If the request is currently blocked, this column returns the type of wait.
For information about types of waits, see sys.dm_os_wait_stats.

last_wait_type If this request has previously been blocked, this column returns the type
of the last wait.

total_elapsed_time Total time elapsed in milliseconds since the request arrived.

cpu_time CPU time in milliseconds that is used by the request.

reads Number of reads performed by this request.

logical_reads Number of logical reads that have been performed by the request.

writes Number of writes performed by this request.

language Keyword that represents a supported script language.

degree_of_parallelism Number indicating the number of parallel processes that were created.
This value might be different from the number of parallel processes that
were requested.

external_user_name The Windows worker account under which the script was executed.

Execution statistics
View the execution statistics for the external runtime for R and Python. Only statistics of
RevoScaleR, revoscalepy, or microsoftml package functions are currently available.

Run the query below to get this output. For more information on the dynamic
management view used, see sys.dm_external_script_execution_stats. The query only
returns functions that have been executed more than once.

SQL

SELECT language, counter_name, counter_value

FROM sys.dm_external_script_execution_stats

WHERE counter_value > 0

ORDER BY language, counter_name;

The query returns the following columns:

Column Description

language Name of the registered external script language.

counter_name Name of a registered external script function.

counter_value Total number of instances that the registered external script function has been
called on the server. This value is cumulative, beginning with the time that the
feature was installed on the instance, and cannot be reset.

Performance counters
View the performance counters related to the execution of external scripts.

Run the query below to get this output. For more information on the dynamic
management view used, see sys.dm_os_performance_counters.

SQL

SELECT counter_name, cntr_value

FROM sys.dm_os_performance_counters

WHERE object_name LIKE '%External Scripts%'

sys.dm_os_performance_counters outputs the following performance counters for


external scripts:

Counter Description

Total Number of external processes started by local or remote calls.


Executions

Parallel Number of times that a script included the @parallel specification and that SQL
Executions Server was able to generate and use a parallel query plan.

Streaming Number of times that the streaming feature has been invoked.
Executions
Counter Description

SQL CC Number of external scripts run where the call was instantiated remotely and SQL
Executions Server was used as the compute context.

Implied Number of times that an ODBC loopback call was made using implied
Auth. authentication; that is, the SQL Server executed the call on behalf of the user
Logins sending the script request.

Total Time elapsed between the call and completion of call.


Execution
Time (ms)

Execution Number of times scripts reported errors. This count does not include R or Python
Errors errors.

Memory usage
View information about the memory used by the OS, SQL Server, and the external pools.

Run the query below to get this output. For more information on the dynamic
management views used, see sys.dm_resource_governor_external_resource_pools and
sys.dm_os_sys_info.

SQL

SELECT physical_memory_kb, committed_kb

, (SELECT SUM(peak_memory_kb)

FROM sys.dm_resource_governor_external_resource_pools AS ep

) AS external_pool_peak_memory_kb

FROM sys.dm_os_sys_info;

The query returns the following columns:

Column Description

physical_memory_kb The total amount of physical memory on the machine.

committed_kb The committed memory in kilobytes (KB) in the memory


manager. Does not include reserved memory in the memory
manager.

external_pool_peak_memory_kb The sum of the maximum amount of memory used, in


kilobytes, for all external resource pools.
Memory configuration
View information about the maximum memory configuration in percentage of SQL
Server and external resource pools. If SQL Server is running with the default value of max
server memory (MB) , it is considered as 100% of the OS memory.

Run the query below to get this output. For more information on the views used, see
sys.configurations and sys.dm_resource_governor_external_resource_pools.

SQL

SELECT 'SQL Server' AS name

, CASE CAST(c.value AS BIGINT)

WHEN 2147483647 THEN 100

ELSE (SELECT CAST(c.value AS BIGINT) / (physical_memory_kb / 1024.0)


* 100 FROM sys.dm_os_sys_info)

END AS max_memory_percent

FROM sys.configurations AS c

WHERE c.name LIKE 'max server memory (MB)'

UNION ALL

SELECT CONCAT ('External Pool - ', ep.name) AS pool_name,


ep.max_memory_percent

FROM sys.dm_resource_governor_external_resource_pools AS ep;

The query returns the following columns:

Column Description

name Name of the external resource pool or SQL Server.

max_memory_percent The maximum memory that SQL Server or the external resource pool can
use.

Resource pools
In SQL Server Resource Governor, a resource pool represents a subset of the physical
resources of an instance. You can specify limits on the amount of CPU, physical IO, and
memory that incoming application requests, including execution of external scripts, can
use within the resource pool. View the resource pools used for SQL Server and external
scripts.
Run the query below to get this output. For more information on the dynamic
management views used, see sys.dm_resource_governor_resource_pools and
sys.dm_resource_governor_external_resource_pools.

SQL

SELECT CONCAT ('SQL Server - ', p.name) AS pool_name

, p.total_cpu_usage_ms, p.read_io_completed_total,
p.write_io_completed_total

FROM sys.dm_resource_governor_resource_pools AS p

UNION ALL

SELECT CONCAT ('External Pool - ', ep.name) AS pool_name

, ep.total_cpu_user_ms, ep.read_io_count, ep.write_io_count

FROM sys.dm_resource_governor_external_resource_pools AS ep;

The query returns the following columns:

Column Description

pool_name Name of the resource pool. SQL Server resource pools are prefixed
with SQL Server and external resource pools are prefixed with
External Pool .

total_cpu_usage_hours The cumulative CPU usage in milliseconds since the Resource


Governor statistics were reset.

read_io_completed_total The total read IOs completed since the Resource Governor statistics
were reset.

write_io_completed_total The total write IOs completed since the Resource Governor statistics
were reset.

Installed packages
You can to view the R and Python packages that are installed in SQL Server Machine
Learning Services by executing an R or Python script that outputs these.

Installed packages for R


View the R packages installed in SQL Server Machine Learning Services.
Run the query below to get this output. The query use an R script to determine R
packages installed with SQL Server.

SQL

EXECUTE sp_execute_external_script @language = N'R'

, @script = N'

OutputDataSet <- data.frame(installed.packages()[,c("Package", "Version",


"Depends", "License", "LibPath")]);'

WITH result sets((Package NVARCHAR(255), Version NVARCHAR(100), Depends


NVARCHAR(4000)

, License NVARCHAR(1000), LibPath NVARCHAR(2000)));

The columns returned are:

Column Description

Package Name of the installed package.

Version Version of the package.

Depends Lists the package(s) that the installed package depends on.

License License for the installed package.

LibPath Directory where you can find the package.

Installed packages for Python


View the Python packages installed in SQL Server Machine Learning Services.
Run the query below to get this output. The query use an Python script to determine the
Python packages installed with SQL Server.

SQL

EXECUTE sp_execute_external_script @language = N'Python'

, @script = N'

import pkg_resources

import pandas

OutputDataSet = pandas.DataFrame(sorted([(i.key, i.version, i.location) for


i in pkg_resources.working_set]))'

WITH result sets((Package NVARCHAR(128), Version NVARCHAR(128), Location


NVARCHAR(1000)));

The columns returned are:

Column Description

Package Name of the installed package.

Version Version of the package.

Location Directory where you can find the package.

Next steps
Extended events for machine learning
Resource Governor Related Dynamic Management Views
System Dynamic Management Views
Monitor machine learning using custom reports in Management Studio
Monitor Python and R scripts with
extended events in SQL Server Machine
Learning Services
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

Learn how to use extended events to monitor and troubleshooting operations related to
the SQL Server Machine Learning Services, SQL Server Launchpad, and Python or R jobs
external scripts.

Extended events for SQL Server Machine


Learning Services
To view a list of events related to SQL Server Machine Learning Services, run the
following query from Azure Data Studio or SQL Server Management Studio.

SQL

SELECT o.name AS event_name, o.description

FROM sys.dm_xe_objects o

JOIN sys.dm_xe_packages p

ON o.package_guid = p.guid

WHERE o.object_type = 'event'

AND p.name = 'SQLSatellite';

For more information about how to use extended events, see Extended Events Tools.

Additional events specific to Machine Learning


Services
Additional extended events are available for components that are related to and used
by SQL Server Machine Learning Services, such as the SQL Server Launchpad, and
BXLServer, and the satellite process that starts the Python or R runtime. These additional
extended events are fired from the external processes; therefore, they must be captured
using an external utility.

For more information about how to do this, see the section, Collecting events from
external processes.
Table of extended events
Event Description Notes

connection_accept Occurs when


a new
connection is
accepted. This
event serves
to log all
connection
attempts.

failed_launching Launching Indicates an error.


failed.

satellite_abort_connection Abort
connection
record

satellite_abort_received Fires when an


abort
message is
received over
a satellite
connection.

satellite_abort_sent Fires when an


abort
message is
sent over
satellite
connection.

satellite_authentication_completion Fires when


authentication
completes for
a connection
over TCP or
Named pipe.

satellite_authorization_completion Fires when


authorization
completes for
a connection
over TCP or
Named pipe.
Event Description Notes

satellite_cleanup Fires when Fired only from external process. See


satellite calls instructions on collecting events from
cleanup. external processes.

satellite_data_chunk_sent Fires when The event reports the number of rows


the satellite sent, the number of columns, the
connection number of SNI packets used and time
finishes elapsed in milliseconds while sending
sending a the chunk. The information can help
single data you understand how much time is
chunk. spent passing different types of data,
and how many packets are used.

satellite_data_receive_completion Fires when all Fired only from external process. See
the required instructions on collecting events from
data by a external processes.
query is
received over
the satellite
connection.

satellite_data_send_completion Fires when all


required data
for a session
is sent over
the satellite
connection.

satellite_data_send_start Fires when Data transmission starts just before the


data first data chunk is sent.
transmission
starts.

satellite_error Used for


tracing sql
satellite error

satellite_invalid_sized_message Message's
size is not
valid

satellite_message_coalesced Used for


tracing
message
coalescing at
networking
layer
Event Description Notes

satellite_message_ring_buffer_record message ring


buffer record

satellite_message_summary summary
information
about
messaging

satellite_message_version_mismatch Message's
version field is
not matched

satellite_messaging Used for


tracing
messaging
event (bind,
unbind, etc.)

satellite_partial_message Used for


tracing partial
message at
networking
layer

satellite_schema_received Fires when


schema
message is
received and
read by SQL.

satellite_schema_sent Fires when Fired only from external process. See


schema instructions on collecting events from
message is external processes.
sent by the
satellite.

satellite_service_start_posted Fires when This tells Launchpad to start the


service start external process, and contains an ID
message is for the new session.
posted to
launchpad.

satellite_unexpected_message_received Fires when an Indicates an error.


unexpected
message is
received.
Event Description Notes

stack_trace Occurs when Indicates an error.


a memory
dump of the
process is
requested.

trace_event Used for These events can contain SQL Server,


tracing Launchpad, and external process trace
purposes messages. This includes output to
stdout and stderr from R.

launchpad_launch_start Fires when Fired only from Launchpad. See


launchpad instructions on collecting events from
starts launchpad.exe.
launching a
satellite.

launchpad_resume_sent Fires when Fired only from Launchpad. See


launchpad instructions on collecting events from
has launched launchpad.exe.
the satellite
and sent a
resume
message to
SQL Server.

satellite_data_chunk_sent Fires when Contains information about the


the satellite number of columns, number of rows,
connection number of packets, and time elapsed
finishes sending the chunk.
sending a
single data
chunk.

satellite_sessionId_mismatch Message's
session ID is
not expected

Collecting events from external processes


SQL Server Machine Learning Services starts some services that run outside of the SQL
Server process. To capture events related to these external processes, you must create
an events trace configuration file and place the file in the same directory as the
executable for the process.

SQL Server Launchpad


To capture events related to the Launchpad, place the .xml file in the Binn directory
for the SQL Server instance. In a default installation, this would be:

C:\Program Files\Microsoft SQL

Server\MSSQL_version_number.MSSQLSERVER\MSSQL\Binn .

BXLServer is the satellite process that supports SQL extensibility with external
script languages, such as R or Python. A separate instance of BxlServer is launched
for each external language instance.

To capture events related to BXLServer, place the .xml file in the R or Python
installation directory. In a default installation, this would be:

R: C:\Program Files\Microsoft SQL


Server\MSSQL_version_number.MSSQLSERVER\R_SERVICES\library\RevoScaleR\rxLibs\x

64 .

Python: C:\Program Files\Microsoft SQL


Server\MSSQL_version_number.MSSQLSERVER\PYTHON_SERVICES\Lib\site-
packages\revoscalepy\rxLibs .

The configuration file must be named the same as the executable, using the format "
[name].xevents.xml". In other words, the files must be named as follows:

Launchpad.xevents.xml

bxlserver.xevents.xml

The configuration file itself has the following format:

XML

<?xml version="1.0" encoding="utf-8"?>

<event_sessions>

<event_session name="[session name]" maxMemory="1" dispatchLatency="1"


MaxDispatchLatency="2 SECONDS">

<description owner="you">Xevent for launchpad or bxl server.


</description>

<event package="SQLSatellite" name="[XEvent Name 1]" />

<event package="SQLSatellite" name="[XEvent Name 2]" />

<target package="package0" name="event_file">

<parameter name="filename" value="[SessionName].xel" />

<parameter name="max_file_size" value="10" />

<parameter name="max_rollover_files" value="10" />

</target>

</event_session>

</event_sessions>

To configure the trace, edit the session name placeholder, the placeholder for the
filename ( [SessionName].xel ), and the names of the events you want to capture,
For example, [XEvent Name 1] , [XEvent Name 1] ).
Any number of event package tags may appear, and will be collected as long as
the name attribute is correct.

Example: Capturing Launchpad events


The following example shows the definition of an event trace for the Launchpad service:

XML

<?xml version="1.0" encoding="utf-8"?>

<event_sessions>

<event_session name="sqlsatelliteut" maxMemory="1" dispatchLatency="1"


MaxDispatchLatency="2 SECONDS">

<description owner="hay">Xevent for sql tdd runner.</description>

<event package="SQLSatellite" name="launchpad_launch_start" />

<event package="SQLSatellite" name="launchpad_resume_sent" />

<target package="package0" name="event_file">

<parameter name="filename" value="launchpad_session.xel" />

<parameter name="max_file_size" value="10" />

<parameter name="max_rollover_files" value="10" />

</target>

</event_session>

</event_sessions>

Place the .xml file in the Binn directory for the SQL Server instance.
This file must be named Launchpad.xevents.xml .

Example: Capturing BXLServer events


The following example shows the definition of an event trace for the BXLServer
executable.

XML

<?xml version="1.0" encoding="utf-8"?>

<event_sessions>

<event_session name="sqlsatelliteut" maxMemory="1" dispatchLatency="1"


MaxDispatchLatency="2 SECONDS">

<description owner="hay">Xevent for sql tdd runner.</description>

<event package="SQLSatellite" name="satellite_abort_received" />

<event package="SQLSatellite" name="satellite_authentication_completion"


/>

<event package="SQLSatellite" name="satellite_cleanup" />

<event package="SQLSatellite" name="satellite_data_receive_completion"


/>

<event package="SQLSatellite" name="satellite_data_send_completion" />

<event package="SQLSatellite" name="satellite_data_send_start" />

<event package="SQLSatellite" name="satellite_schema_sent" />

<event package="SQLSatellite"
name="satellite_unexpected_message_received" />

<event package="SQLSatellite" name="satellite_data_chunk_sent" />

<target package="package0" name="event_file">

<parameter name="filename" value="satellite_session.xel" />

<parameter name="max_file_size" value="10" />

<parameter name="max_rollover_files" value="10" />

</target>

</event_session>

</event_sessions>

Place the .xml file in the same directory as the BXLServer executable.
This file must be named bxlserver.xevents.xml .

Next steps
Monitor Python and R script execution using custom reports in SQL Server
Management Studio
Monitor SQL Server Machine Learning Services using dynamic management views
(DMVs)
Monitor PREDICT T-SQL statements
with extended events in SQL Server
Machine Learning Services
Article • 03/03/2023

Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance

Learn how to use extended events to monitor and troubleshooting PREDICT T-SQL
statements in SQL Server Machine Learning Services.

Table of extended events


The following extended events are available in all versions of SQL Server that support
the PREDICT T-SQL statement.

name object_type description

predict_function_completed event Builtin execution time breakdown

predict_model_cache_hit event Occurs when a model is retrieved from the


PREDICT function model cache. Use this event
along with other predict_model_cache_* events to
troubleshoot issues caused by the PREDICT
function model cache.

predict_model_cache_insert event Occurs when a model is insert into the PREDICT


function model cache. Use this event along with
other predict_model_cache_* events to
troubleshoot issues caused by the PREDICT
function model cache.

predict_model_cache_miss event Occurs when a model is not found in the PREDICT


function model cache. Frequent occurrences of
this event could indicate that SQL Server needs
more memory. Use this event along with other
predict_model_cache_* events to troubleshoot
issues caused by the PREDICT function model
cache.

predict_model_cache_remove event Occurs when a model is removed from model


cache for PREDICT function. Use this event along
with other predict_model_cache_* events to
troubleshoot issues caused by the PREDICT
function model cache.
Query for related events
To view a list of all columns returned for these events, run the following query in SQL
Server Management Studio:

SQL

SELECT *

FROM sys.dm_xe_object_columns

WHERE object_name LIKE 'predict%'

Examples
To capture information about performance of a scoring session using PREDICT:

1. Create a new extended event session, using Management Studio or another


supported tool.
2. Add the events predict_function_completed and predict_model_cache_hit to the
session.
3. Start the extended event session.
4. Run the query that uses PREDICT.

In the results, review these columns:

The value for predict_function_completed shows how much time the query spent
on loading the model and scoring.
The boolean value for predict_model_cache_hit indicates whether the query used
a cached model or not.

Native scoring model cache


In addition to the events specific to PREDICT, you can use the following queries to get
more information about the cached model and cache usage:

View the native scoring model cache:

SQL

SELECT *

FROM sys.dm_os_memory_clerks

WHERE type = 'CACHESTORE_NATIVESCORING';

View the objects in the model cache:


SQL

SELECT *

FROM sys.dm_os_memory_objects

WHERE TYPE = 'MEMOBJ_NATIVESCORING';

Next steps
For more information about extended events (sometimes called XEvents), and how to
track events in a session, see these articles:

Monitor Python and R scripts with extended events in SQL Server Machine
Learning Services
Extended Events concepts and architecture
Set up event capture in SSMS
Manage event sessions in the Object Explorer
Grant database users permission to
execute Python and R scripts with SQL
Server Machine Learning Services
Article • 03/03/2023

Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance

Learn how you can give a database user permission to run external Python and R scripts
in SQL Server Machine Learning Services and give read, write, or data definition
language (DDL) permissions to databases.

For more information, see the permissions section in Security overview for the
extensibility framework.

Permission to run scripts


For each user who runs Python or R scripts with SQL Server Machine Learning Services,
and who are not an administrator, you must grant them the permission to run external
scripts in each database where the language is used.

To grant permission to a database user to execute external script, run the following
script:

SQL

USE <database_name>

GO

GRANT EXECUTE ANY EXTERNAL SCRIPT TO [UserName]

7 Note

Permissions are not specific to the supported script language. In other words, there
are not separate permission levels for R script versus Python script.

Grant database permissions


While a database user is running scripts, the database user might need to read data
from other databases. The database user might also need to create new tables to store
results, and write data into tables.

For each database user account or SQL login that is running R or Python scripts, ensure
that it has the appropriate permissions on the specific database:

db_datareader to read data.


db_datawriter to save objects to the database.

db_ddladmin to create objects such as stored procedures or tables containing

trained and serialized data.

For example, the following Transact-SQL statement gives the SQL login MySQLLogin the
rights to run T-SQL queries in the ML_Samples database. To run this statement, the SQL
login must already exist in the security context of the server. For more information, see
sp_addrolemember (Transact-SQL).

SQL

USE ML_Samples

GO

EXEC sp_addrolemember 'db_datareader', 'MySQLLogin'

Next steps
For more information about the permissions included in each role, see Database-level
roles.
Linked Servers (Database Engine)
Article • 03/03/2023

Applies to:
SQL Server
Azure SQL Managed Instance

Linked servers enable the SQL Server Database Engine and Azure SQL Managed
Instance to read data from the remote data sources and execute commands against the
remote database servers (for example, OLE DB data sources) outside of the instance of
SQL Server. Typically linked servers are configured to enable the Database Engine to
execute a Transact-SQL statement that includes tables in another instance of SQL Server,
or another database product such as Oracle. Many types OLE DB data sources can be
configured as linked servers, including third-party database providers and Azure
CosmosDB.

7 Note

Linked servers are available in SQL Server Database Engine and Azure SQL
Managed Instance. They are not enabled in Azure SQL Database singleton and
elastic pools. There are some constraints in Managed Instance that can be found
here.

When to use linked servers?


Linked servers enable you to implement distributed databases that can fetch and update
data in other databases. They are a good solution in the scenarios where you need to
implement database sharding without need to create a custom application code or
directly load from remote data sources. Linked servers offer the following advantages:

The ability to access data from outside of SQL Server.

The ability to issue distributed queries, updates, commands, and transactions on


heterogeneous data sources across the enterprise.

The ability to address diverse data sources similarly.

You can configure a linked server by using SQL Server Management Studio or by using
the sp_addlinkedserver (Transact-SQL) statement. OLE DB providers vary greatly in the
type and number of parameters required. For example, some providers require you to
provide a security context for the connection using sp_addlinkedsrvlogin (Transact-SQL).
Some OLE DB providers allow SQL Server to update data on the OLE DB source. Others
provide only read-only data access. For information about each OLE DB provider,
consult documentation for that OLE DB provider.

Linked server components


A linked server definition specifies the following objects:

An OLE DB provider

An OLE DB data source

An OLE DB provider is a DLL that manages and interacts with a specific data source. An
OLE DB data source identifies the specific database that can be accessed through OLE
DB. Although data sources queried through linked server definitions are ordinarily
databases, OLE DB providers exist for a variety of files and file formats. These include
text files, spreadsheet data, and the results of full-text content searches.

Starting with SQL Server 2019 (15.x), the Microsoft OLE DB Driver for SQL Server
(MSOLEDBSQL) (PROGID: MSOLEDBSQL) is the default OLE DB provider. In earlier
versions, the SQL Server Native Client OLE DB provider (SQLNCLI) (PROGID: SQLNCLI11)
was the default OLE DB provider.

) Important

The SQL Server Native Client (often abbreviated SNAC) has been removed from
SQL Server 2022 (16.x) and SQL Server Management Studio 19 (SSMS). Both the
SQL Server Native Client OLE DB provider (SQLNCLI or SQLNCLI11) and the legacy
Microsoft OLE DB Provider for SQL Server (SQLOLEDB) are not recommended for
new development. Switch to the new Microsoft OLE DB Driver (MSOLEDBSQL) for
SQL Server going forward.

Linked servers to Microsoft Access and Excel sources are only supported by Microsoft
when using the 32-bit Microsoft.JET.OLEDB.4.0 OLE DB provider.

7 Note

SQL Server distributed queries are designed to work with any OLE DB provider that
implements the required OLE DB interfaces. However, SQL Server has been tested
against the default OLE DB provider.
Linked server details
The following illustration shows the basics of a linked server configuration.

Typically, linked servers are used to handle distributed queries. When a client application
executes a distributed query through a linked server, SQL Server parses the command
and sends requests to OLE DB. The rowset request may be in the form of executing a
query against the provider or opening a base table from the provider.

7 Note

For a data source to return data through a linked server, the OLE DB provider (DLL)
for that data source must be present on the same server as the instance of SQL
Server.

) Important

When an OLE DB provider is used, the account under which the SQL Server service
runs must have read and execute permissions for the directory, and all
subdirectories, in which the provider is installed. This includes Microsoft-released
providers, and any third-party providers.

7 Note
Linked servers support Active Directory pass-through authentication when using
full delegation. Starting with SQL Server 2017 (14.x) CU17, pass-through
authentication with constrained delegation is also supported; however, resource-
based constrained delegation is not supported.

Manage providers
There is a set of options that control how SQL Server loads and uses OLE DB providers
that are specified in the registry.

Manage linked server definitions


When you are setting up a linked server, register the connection information and data
source information with SQL Server. After being registered, that data source can be
referred to with a single logical name.

You can use stored procedures and catalog views to manage linked server definitions:

Create a linked server definition by running sp_addlinkedserver .

View information about the linked servers defined in a specific instance of SQL
Server by running a query against the sys.servers system catalog view.

Delete a linked server definition by running sp_dropserver . You can also use this
stored procedure to remove a remote server.

You can also define linked servers by using SQL Server Management Studio. In the
Object Explorer, right-click Server Objects, select New, and select Linked Server. You
can delete a linked server definition by right-clicking the linked server name and
selecting Delete.

When you execute a distributed query against a linked server, include a fully qualified,
four-part table name for each data source to query. This four-part name should be in
the form linked_server_name.catalog.schema.object_name.

7 Note

Linked servers can be defined to point back (loop back) to the server on which they
are defined. Loopback servers are most useful when testing an application that uses
distributed queries on a single server network. Loopback linked servers are
intended for testing and are not supported for many operations, such as
distributed transactions.

Azure SQL Managed Instance linked server


authentication
Azure SQL Managed Instance linked servers support both SQL authentication, and Azure
AD (AAD) authentication. Two supported AAD authentication modes are: Managed
identity and pass-through. Managed identity authentication can be used to allow local
logins to query remote linked servers. Pass-through authentication allows a principal
that can authenticate with a local instance to access a remote instance via linked server.
Prerequisites for pass-through authentication are that the same principal is added as a
login on the remote server and that both instances are members of the SQL trust group.

7 Note

Existing definitions of linked servers that were configured for pass-through mode
will support Azure AD authentication. The only requirement for this would be to
add Managed Instances to Server Trust Group.

Limitations of Azure AD authentication


Azure AD authentication is not supported for Managed Instances in different Azure
AD tenants.
Azure AD authentication for linked servers is supported only with OLE DB driver
version 18.2.1 and higher.
Azure AD authentication for linked servers from Managed Instance to SQL Server is
supported for mapped local logins only. Propagating security context is not
supported. That means that managed identity authentication is supported, while
pass-through authentication is not supported.

MSOLEDBSQL19 and linked servers


Currently, MSOLEDBSQL19 prevents the creation of linked servers without encryption
and a trusted certificate (a self-signed certificate is insufficient). If linked servers are
required, use the existing supported version of MSOLEDBSQL.

See also
sys.servers (Transact-SQL)
sp_linkedservers (Transact-SQL)

Next steps
Create Linked Servers (SQL Server Database Engine)
sp_addlinkedserver (Transact-SQL)
sp_addlinkedsrvlogin (Transact-SQL)
sp_dropserver (Transact-SQL)
Service Broker
Article • 11/18/2022

Applies to:
SQL Server
Azure SQL Managed Instance

SQL Server Service Broker provide native support for messaging and queuing in the SQL
Server Database Engine and Azure SQL Managed Instance. Developers can easily create
sophisticated applications that use the Database Engine components to communicate
between disparate databases, and build distributed and reliable applications.

When to use Service Broker


Use Service Broker components to implement native in-database asynchronous
message processing functionalities. Application developers who use Service Broker can
distribute data workloads across several databases without programming complex
communication and messaging internals. Service Broker reduces development and test
work because Service Broker handles the communication paths in the context of a
conversation. It also improves performance. For example, front-end databases
supporting Web sites can record information and send process intensive tasks to queue
in back-end databases. Service Broker ensures that all tasks are managed in the context
of transactions to assure reliability and technical consistency.

Overview
Service Broker is a message delivery framework that enables you to create native in-
database service-oriented applications. Unlike classic query processing functionalities
that constantly read data from the tables and process them during the query lifecycle, in
service-oriented application you have database services that are exchanging the
messages. Every service has a queue where the messages are placed until they are
processed.
The messages in the queues can be fetched using the Transact-SQL RECEIVE command
or by the activation procedure that will be called whenever the message arrives in the
queue.

Creating services
Database services are created by using the CREATE SERVICE Transact SQL statement.
Service can be associated with the message queue create by using the CREATE QUEUE
statement:

SQL

CREATE QUEUE dbo.ExpenseQueue;

GO

CREATE SERVICE ExpensesService

ON QUEUE dbo.ExpenseQueue;

Sending messages
Messages are sent on the conversation between the services using the SEND Transact-
SQL statement. A conversation is a communication channel that is established between
the services using the BEGIN DIALOG Transact-SQL statement.

SQL

DECLARE @dialog_handle UNIQUEIDENTIFIER;

BEGIN DIALOG @dialog_handle

FROM SERVICE ExpensesClient

TO SERVICE 'ExpensesService';

SEND ON CONVERSATION @dialog_handle (@Message) ;

The message will be sent to the ExpenssesService and placed in dbo.ExpenseQueue .


Because there is no activation procedure associated to this queue, the message will
remain in the queue until someone reads it.

Processing messages
The messages that are placed in the queue can be selected by using a standard SELECT
query. The SELECT statement will not modify the queue and remove the messages. To
read and pull the messages from the queue, you can use the RECEIVE Transact-SQL
statement.
SQL

RECEIVE conversation_handle, message_type_name, message_body

FROM ExpenseQueue;

Once you process all messages from the queue, you should close the conversation using
the END CONVERSATION Transact-SQL statement.

Where is the documentation for Service


Broker?
The reference documentation for Service Broker is included in the SQL Server
documentation. This reference documentation includes the following sections:

Data Definition Language (DDL) Statements (Transact-SQL) for CREATE, ALTER, and
DROP statements

Service Broker Statements

Service Broker Catalog Views (Transact-SQL)

Service Broker Related Dynamic Management Views (Transact-SQL)

ssbdiagnose Utility (Service Broker)

See the previously published documentation for Service Broker concepts and for
development and management tasks. This documentation is not reproduced in the SQL
Server documentation due to the small number of changes in Service Broker in recent
versions of SQL Server.

What's new in Service Broker

Service broker and Azure SQL Managed Instance


Cross-instance service broker message exchange is supported only between Azure SQL
Managed Instances:

CREATE ROUTE : You can't use CREATE ROUTE with ADDRESS other than LOCAL or
DNS name of another SQL Managed Instance. Port specified must be 4022. See
CREATE ROUTE.
ALTER ROUTE : You can't use ALTER ROUTE with ADDRESS other than LOCAL or DNS
name of another SQL Managed Instance. Port specified must be 4022. See See
ALTER ROUTE.

Transport security is supported, dialog security is not:

CREATE REMOTE SERVICE BINDING is not supported.

Service broker is enabled by default and cannot be disabled. The following ALTER
DATABASE options are not supported:

ENABLE_BROKER

DISABLE_BROKER

No significant changes were introduced in SQL Server 2019 (15.x). The following
changes were introduced in SQL Server 2012 (11.x).

Messages can be sent to multiple target services


(multicast)
The syntax of the SEND (Transact-SQL) statement has been extended to enable multicast
by supporting multiple conversation handles.

Queues expose the message enqueued time


Queues have a new column, message_enqueue_time, that shows how long a message
has been in the queue.

Poison message handling can be disabled


The CREATE QUEUE (Transact-SQL) and ALTER QUEUE (Transact-SQL) statements now
have the ability to enable or disable poison message handling by adding the clause,
POISON_MESSAGE_HANDLING (STATUS = ON | OFF) . The catalog view sys.service_queues

now has the column is_poison_message_handling_enabled to indicate whether poison


message is enabled or disabled.

Always On support in Service Broker


For more information, see Service Broker with Always On Availability Groups (SQL
Server).

Next steps
The most common use of Service Broker is for event notifications. Learn how to
implement event notifications, configure dialog security, or get more information.
Database Mail
Article • 02/28/2023

Applies to:
SQL Server
Azure SQL Managed Instance

Database Mail is an enterprise solution for sending e-mail messages from the SQL
Server Database Engine or Azure SQL Managed Instance. Your applications can send e-
mail messages to users using Database Mail via an external SMTP server. The messages
can contain query results, and can also include files from any resource on your network.

7 Note

Database Mail is available in SQL Server Database Engine and Azure SQL Managed
Instance, but not in Azure SQL database singleton and elastic pools. For more
information on using Database Mail in Azure SQL Managed Instance, see Automate
management tasks using SQL Agent jobs in Azure SQL Managed Instance.

Benefits of using Database Mail


Database Mail is designed for reliability, scalability, security, and supportability.

Reliability
Database Mail uses the standard Simple Mail Transfer Protocol (SMTP) to send
mail. You can use Database Mail without installing an Extended MAPI client on the
computer that runs SQL Server.

Process isolation. To minimize the impact on SQL Server, the component that
delivers e-mail runs outside of SQL Server, in a separate process. SQL Server will
continue to queue e-mail messages even if the external process stops or fails. The
queued messages will be sent once the outside process or SMTP server comes
online.

Failover accounts. A Database Mail profile allows you to specify more than one
SMTP server. Should an SMTP server be unavailable, mail can still be delivered to
another SMTP server.

Cluster support. Database Mail is cluster-aware and is fully supported on a cluster.

Scalability
Background Delivery: Database Mail provides background, or asynchronous,
delivery. When you call sp_send_dbmail to send a message, Database Mail adds a
request to a Service Broker queue. The stored procedure returns immediately. The
external e-mail component receives the request and delivers the e-mail.

Multiple profiles: Database Mail allows you to create multiple profiles within a SQL
Server instance. Optionally, you can choose the profile that Database Mail uses
when you send a message.

Multiple accounts: Each profile can contain multiple failover accounts. You can
configure different profiles with different accounts to distribute e-mail across
multiple e-mail servers.

64-bit compatibility: Database Mail is fully supported on 64-bit installations of SQL


Server.

Security
Off by default: To reduce the surface area of SQL Server, Database Mail stored
procedures are disabled by default.

Mail Security:To send Database Mail, you must be a member of the


DatabaseMailUserRole database role in the msdb database.

Profile security: Database Mail enforces security for mail profiles. You choose the
msdb database users or groups that have access to a Database Mail profile. You can

grant access to either specific users, or all users in msdb . A private profile restricts
access to a specified list of users. A public profile is available to all users in a
database.

Attachment size governor: Database Mail enforces a configurable limit on the


attachment file size. You can change this limit by using the sysmail_configure_sp
stored procedure.

Prohibited file extensions: Database Mail maintains a list of prohibited file


extensions. Users cannot attach files with an extension that appears in the list. You
can change this list by using sysmail_configure_sp.

Database Mail runs under the SQL Server Engine service account. To attach a file
from a folder to an email, the SQL Server engine account should have permissions
to access the folder with the file.

Supportability
Integrated configuration: Database Mail maintains the information for e-mail
accounts within SQL Server Database Engine. There is no need to manage a mail
profile in an external client application. Database Mail Configuration Wizard
provides a convenient interface for configuring Database Mail. You can also create
and maintain Database Mail configurations using Transact-SQL.

Logging. Database Mail logs e-mail activity to SQL Server, the Microsoft Windows
application event log, and to tables in the msdb database.

Auditing: Database Mail keeps copies of messages and attachments sent in the
msdb database. You can easily audit Database Mail usage and review the retained
messages.

Support for HTML: Database Mail allows you to send e-mail formatted as HTML.

Database Mail Architecture


Database Mail is designed on a queued architecture that uses service broker
technologies. When users execute sp_send_dbmail , the stored procedure inserts an item
into the mail queue and creates a record that contains the e-mail message. Inserting the
new entry in the mail queue starts the external Database Mail process
(DatabaseMail.exe). The external process reads the e-mail information and sends the e-
mail message to the appropriate e-mail server or servers. The external process inserts an
item in the Status queue for the outcome of the send operation. Inserting the new entry
in the status queue starts an internal stored procedure that updates the status of the e-
mail message. Besides storing the sent, or unsent, e-mail message, Database Mail also
records any e-mail attachments in the system tables. Database Mail views provide the
status of messages for troubleshooting, and stored procedures allow for administration
of the Database Mail queue.
Introduction to Database Mail components
Database Mail consists of the following main components:

Configuration and security components

Database Mail stores configuration and security information in the msdb database.
Configuration and security objects create profiles and accounts used by Database
Mail.

Messaging components

The msdb database acts as the mail-host database that holds the messaging
objects that Database Mail uses to send e-mail. These objects include the
sp_send_dbmail stored procedure and the data structures that hold information
about messages.

Database Mail executable

The Database Mail executable is an external program that reads from a queue in
the msdb database and sends messages to e-mail servers.

Logging and auditing components


Database Mail records logging information in the msdb database and the Microsoft
Windows application event log.

Configuring SQL Agent to use Database Mail


SQL Server Agent can be configured to use Database Mail. This is required for alert
notifications and automatic notification when a job completes.

2 Warning

Individual job steps within a job can also send e-mail without configuring SQL
Server Agent to use Database Mail. For example, a Transact-SQL job step can use
Database Mail to send the results of a query to a list of recipients.

You can configure SQL Server Agent to send e-mail messages to predefined operators
when:

An alert is triggered. Alerts can be configured to send e-mail notification of specific


events that occur. For example, alerts can be configured to notify an operator of a
particular database event or operating system condition that may need immediate
action. For more information about configuring alerts, see Alerts.

A scheduled task, such as a database backup or replication event, succeeds or fails.


For example, you can use SQL Server Agent Mail to notify operators if an error
occurs during processing at the end of a month.

See also
Database Mail Configuration Objects
Database Mail Messaging Objects
Database Mail External Program
Database Mail Log and Audits

Next steps
Configure Database Mail
Configure SQL Server Agent Mail to Use Database Mail
Automate management tasks using SQL Agent jobs in Azure SQL Managed
Instance
Migrate SQL Managed Instance to
availability zone support
Article • 05/26/2023

) Important

Zone redundancy for SQL Managed Instance is currently in Preview. To learn which
regions support SQL Instance zone redundancy, see Services support by region.

SQL Managed Instance offers a zone redundant configuration that uses Azure
availability zones to replicate your instances across multiple physical locations within an
Azure region. With zone redundancy enabled, your Business Critical managed instances
become resilient to a larger set of failures, such as catastrophic datacenter outages,
without any changes to application logic. For more information on the availability model
for SQL Database, see Business Critical service tier zone redundant availability section in
the Azure SQL documentation.

This guide describes how to migrate SQL Managed Instances that use Business Critical
service tier from non-availability zone support to availability zone support. Once the
zone redundant option is enabled, Azure SQL Managed Instance automatically
reconfigures the instance.

Prerequisites
To migrate to availability-zone support:

1. Your instance must be running under Business Critical tier with the November 2022
feature wave update. To learn more about how to onboard an existing SQL
managed instance to the November 2022 update, see November 2022 Feature
Wave for Azure SQL Managed Instance

2. Confirm that your instance is located in a supported region. To see the list of
supported regions, see Premium and Business Critical service tier zone redundant
availability:

3. Your instances must be running on standard-series (Gen5) hardware.

Downtime requirements
All scaling operations in Azure SQL are online operations and require minimal to no
downtime. For more details on Azure SQL dynamic scaling, see Dynamically scale
database resources with minimal downtime.

How to enable the zone redundant


configuration
You can configure the zone redundant option by using either Azure portal or ARM API.

To enable the zone redundant option:

Azure portal

To update a current Business Critical managed instance to use a zone redundant


configuration:

1. Sign in to the Azure portal .

2. Go to the instance of SQL Managed Instance that you want to enable for zone
redundancy.

3. In the Create Azure SQL Managed Instance tab, select Configure Managed
Instance.

4. In the Compute + Storage page, select Yes to make the instance zone
redundant.

5. For Backup storage redundancy, choose one of the compatible redundancy


options:

ZRS (Zone Redundant Storage)


GZRS (Geo Zone Redundant Storage)

To learn more about backup storage redundancy options, see Introducing


Geo-Zone Redundant Storage (GZRS) for Azure SQL Managed Instance
backups .

6. Select Apply.

Next steps
Get started with SQL Managed Instance with our Quick Start reference guide

Learn more about Azure SQL Managed Instance zone redundancy and high
availability
SQL Server on Azure VM documentation
Find concepts, quickstarts, tutorials, and samples for SQL Server installed to Azure virtual
machines, both Windows and Linux.

SQL Server on Azure VM

f QUICKSTART

Create Windows SQL VM (portal)

Create Windows SQL VM (PowerShell)

Create Linux SQL VM (portal)

q VIDEO

SQL Server on Azure VM overview

e OVERVIEW

What's new?

What is SQL Server on Windows VM?

What is SQL Server on Linux VM?

Security considerations

Performance guidelines

Pricing guidance

Manage

p CONCEPT

SQL Server IaaS Agent extension

Manage with Azure portal

Register with SQL VM resource provider

Automated patching

Change license type


Change edition of SQL Server

Move to new region

Integrate with Azure Key Vault

Business continuity

e OVERVIEW

High availability & disaster recovery

Backup and restore

Availability groups

c HOW-TO GUIDE

Availability group (Az CLI)

Clusterless availability group

FCI (Storage Spaces Direct)

FCI (Premium File Share)

g TUTORIAL

Availability group (manual)

STONITH availability group (RHEL)

STONITH availability group (SLES)

Availability group listener (RHEL)

Learn Azure SQL

d TRAINING

Azure SQL for beginners

Azure SQL fundamentals

Azure SQL hands-on labs

Azure SQL bootcamp


Educational SQL resources

Reference

` DEPLOY

Azure portal

Azure CLI

PowerShell samples

ARM template samples

a DOWNLOAD

SQL Server Management Studio (SSMS)

Azure Data Studio

SQL Server Data Tools

Visual Studio 2019

i REFERENCE

Migration guide

Transact-SQL (T-SQL)

Azure CLI

PowerShell

REST API
What's new with SQL Server on Azure
Virtual Machines?
Article • 07/14/2023

Applies to: SQL Server on Azure VM

When you deploy an Azure virtual machine (VM) with SQL Server installed on it, either
manually, or through a built-in image, you can use Azure features to improve your
experience. This article summarizes the documentation changes associated with new
features and improvements in the recent releases of SQL Server on Azure Virtual
Machines (VMs) . To learn more about SQL Server on Azure VMs, see the overview.

For updates made in previous years, see the What's new archive.

July 2023

7 Note

SQL Server 2008 and SQL Server 2008 R2 are out of extended support and no
longer available from the Azure Marketplace.

May 2023
Changes Details

Azure SQL bindings for Azure Functions GA Azure Functions supports input bindings, and
output bindings for the Azure SQL and SQL
Server products. This feature is now generally
available. Review Azure SQL bindings for
Azure Functions to learn more.

Azure SQL triggers for Azure Functions preview


Azure Functions supports function triggers for the
Azure SQL and SQL Server products. This feature
is currently in preview. Review Azure SQL triggers
for Azure Functions to learn more.

April 2023
Changes Details

Auto upgrade SQL It's now possible to enable auto upgrade for your SQL IaaS Agent
IaaS Agent extension to ensure you're automatically receiving the latest updates to
extension the extension every month. Review SQL IaaS Agent Settings to learn more.

Azure AD Azure Active Directory (Azure AD) authentication is now generally


authentication GA available. Review Configure Azure AD to learn more.

Migrate AG to Learn how to migrate your single-subnet Always On availability group to


multi-subnet multiple subnets to remove the reliance on an Azure Load Balancer or
Distributed Network Name (DNN) to route traffic to your listener. See
Migrate availability group to a multi-subnet environment to learn more.

March 2023
Changes Details

Removed The architecture for the SQL IaaS Agent extension has been updated to
extension remove management modes. All newly deployed SQL Server VMs are
management registered with the extension by using the same default configuration and
modes least privileged security model. To learn more, review Management modes.

February 2023
Changes Details

Enable Azure AD We've published a guide to help you enable Azure AD authentication for
for SQL Server your SQL Server VM. Review Configure Azure AD to learn more.

January 2023
Changes Details

Extend your multi- Extend an existing multi-subnet availability group, either on Azure virtual
subnet AG to machines, or on-premises, to another region in Azure. To learn more,
multiple regions review Multi-subnet availability group in multiple regions.

2022
Changes Details

Troubleshoot SQL We've added an article to help you troubleshoot and address some
IaaS Agent extension known issues with the SQL Server IaaS agent extension. To learn more,
read Troubleshoot known issues.

Configure AG from There is a new experience to deploy an Always On availability group to


Azure portal multiple subnets by using the Azure portal. The new availability group
deployment method replaces the previous deployment through the SQL
virtual machines resource. This feature is currently in preview. To learn
more, review Configure availability group through the Azure portal.

Azure AD It's now possible to configure Azure Active Directory (Azure AD)
authentication authentication to your SQL Server 2022 on Azure VM by using the Azure
portal. This feature is currently in preview. To get started, review Azure
AD with SQL Server VMs.

Least privilege There is a new permissions model available for the SQL Server IaaS
permission model for Agent extension that grants the least privileged permission for each
SQL IaaS Agent feature used by the extension. To learn more, review SQL IaaS Agent
extension extension permissions.

Confidential VMs SQL Server on Azure VMs has added support to deploy to SQL Server on
Azure confidential VMs. To get started, review the Quickstart: Deploy
SQL Server to an Azure confidential VM.

Azure CLI for SQL It's now possible to configure the SQL best practices assessment feature
best practices using the Azure CLI.
assessment

Configure tempdb It's now possible to configure your tempdb settings, such as the number
from Azure portal of files, initial size, and autogrowth ratio for an existing SQL Server
instance by using the Azure portal. See manage SQL Server VM from
portal to learn more.

SDK-style SQL Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL
projects Database Projects extension in Azure Data Studio or VS Code. This
feature is currently in preview. To learn more, see SDK-style SQL projects.

Ebdsv5-series The new Ebdsv5-series provides the highest I/O throughput-to-vCore


ratio in Azure along with a memory-to-vCore ratio of 8. This series offers
the best price-performance for SQL Server workloads on Azure VMs.
Consider this series first for most SQL Server workloads. To learn more,
see the updates in VM sizes.

Security best The SQL Server VM security best practices have been rewritten and
practices refreshed!

Migrate with It's now possible to migrate your database(s) from a standalone instance
distributed AG of SQL Server or an entire availability group over to SQL Server on Azure
Changes Details

VMs using a distributed availability group! See the prerequisites to get


started.

Contribute to content
To contribute to the Azure SQL documentation, see the Docs contributor guide.

Additional resources
Windows VMs:

Overview of SQL Server on a Windows VM


Provision SQL Server on a Windows VM
Migration guide: SQL Server to SQL Server on Azure Virtual Machines
High availability and disaster recovery for SQL Server on Azure Virtual Machines
Performance best practices for SQL Server on Azure Virtual Machines
Application patterns and development strategies for SQL Server on Azure Virtual
Machines

Linux VMs:

Overview of SQL Server on a Linux VM


Provision SQL Server on a Linux virtual machine
FAQ (Linux)
SQL Server on Linux documentation
What is SQL Server on Windows Azure
Virtual Machines?
Article • 07/24/2023

Applies to: SQL Server on Azure VM

This article provides an overview of SQL Server on Azure Virtual Machines (VMs) on the
Windows platform.

If you're new to SQL Server on Azure VMs, check out the SQL Server on Azure VM
Overview video from our in-depth Azure SQL video series:
https://learn.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-
Overview-4-of-61/player

Overview
SQL Server on Azure Virtual Machines enables you to use full versions of SQL Server in
the cloud without having to manage any on-premises hardware. SQL Server virtual
machines (VMs) also simplify licensing costs when you pay as you go.

Azure virtual machines run in many different geographic regions around the world.
They also offer various machine sizes. The virtual machine image gallery allows you to
create a SQL Server VM with the right version, edition, and operating system. This makes
virtual machines a good option for many different SQL Server workloads.

Feature benefits
When you register your SQL Server on Azure VM with the SQL IaaS Agent extension you
unlock a number of feature benefits. Registering with the extension is completely free.

The following table details the benefits unlocked by the extension:

Feature Description

Portal Unlocks management in the portal, so that you can view all of your SQL
management Server VMs in one place, and enable or disable SQL specific features directly
from the portal.

Included with basic registration.

Automated Automates the scheduling of backups for all databases for either the default
backup instance or a properly installed named instance of SQL Server on the VM. For
more information, see Automated backup for SQL Server in Azure virtual
Feature Description
machines (Resource Manager).

Requires SQL IaaS Agent extension.

Automated Configures a maintenance window during which important Windows and


patching SQL Server security updates to your VM can take place, so you can avoid
updates during peak times for your workload. For more information, see
Automated patching for SQL Server in Azure virtual machines (Resource
Manager).

Requires SQL IaaS Agent extension.

Azure Key Vault Enables you to automatically install and configure Azure Key Vault on your
integration SQL Server VM. For more information, see Configure Azure Key Vault
integration for SQL Server on Azure Virtual Machines (Resource Manager).

Requires SQL IaaS Agent extension.

Flexible licensing Save on cost by seamlessly transitioning from the bring-your-own-license


(also known as the Azure Hybrid Benefit) to the pay-as-you-go licensing
model and back again.

Included with basic registration.

Flexible version / If you decide to change the version or edition of SQL Server, you can update
edition the metadata within the Azure portal without having to redeploy the entire
SQL Server VM.

Included with basic registration.

Configure You can configure your tempdb directly from the Azure portal, such as
tempdb specifying the number of files, their initial size, their location, and the
autogrowth ratio. Restart your SQL Server service for the changes to take
effect.

Requires SQL IaaS Agent extension.

Defender for If you've enabled Microsoft Defender for SQL, then you can view Defender
Cloud portal for Cloud recommendations directly in the SQL virtual machines resource of
integration the Azure portal. See Security best practices to learn more.

Requires SQL IaaS Agent extension.

SQL best practices Enables you to assess the health of your SQL Server VMs using configuration
assessment best practices. For more information, see SQL best practices assessment.

Requires SQL IaaS Agent extension.

View disk Allows you to view a graphical representation of the disk utilization of your
utilization in SQL data files in the Azure portal.
Feature Description

portal
Requires SQL IaaS Agent extension.

Getting started
To get started with SQL Server on Azure VMs, review the following resources:

Create SQL VM: To create your SQL Server on Azure VM, review the Quickstarts
using the Azure portal, Azure PowerShell or an ARM template. For more thorough
guidance, review the Provisioning guide.
Connect to SQL VM: To connect to your SQL Server on Azure VMs, review the ways
to connect.
Migrate data: Migrate your data to SQL Server on Azure VMs from SQL Server,
Oracle, or Db2.
Storage configuration: For information about configuring storage for your SQL
Server on Azure VMs, review Storage configuration.
Performance: Fine-tune the performance of your SQL Server on Azure VM by
reviewing the Performance best practices checklist.
Pricing: For information about the pricing structure of your SQL Server on Azure
VM, review the Pricing guidance.
Frequently asked questions: For commonly asked questions, and scenarios, review
the FAQ.

Videos
For videos about the latest features to optimize SQL Server VM performance and
automate management, review the following Data Exposed videos:

Caching and Storage Capping (Ep. 1)


Automate Management with the SQL Server IaaS Agent extension (Ep. 2)
Use Azure Monitor Metrics to Track VM Cache Health (Ep. 3)
Get the best price-performance for your SQL Server workloads on Azure VM
Using PerfInsights to Evaluate Resource Health and Troubleshoot (Ep. 5)
Best Price-Performance with Ebdsv5 Series (Ep.6)
Optimally Configure SQL Server on Azure Virtual Machines with SQL Assessment
(Ep. 7)
New and Improved SQL Server on Azure VM deployment and management
experience (Ep.8)
High availability & disaster recovery
On top of the built-in high availability provided by Azure virtual machines, you can also
use the high availability and disaster recovery features provided by SQL Server.

To learn more, see the overview of Always On availability groups, and Always On failover
cluster instances. For more details, see the business continuity overview.

To get started, see the tutorials for availability groups or preparing your VM for a
failover cluster instance.

Licensing
To get started, choose a SQL Server virtual machine image with your required version,
edition, and operating system. The following sections provide direct links to the Azure
portal for the SQL Server virtual machine gallery images. Change the licensing model of
a pay-per-usage SQL Server VM to use your own license. For more information, see How
to change the licensing model for a SQL Server VM.

Azure only maintains one virtual machine image for each supported operating system,
version, and edition combination. This means that over time images are refreshed, and
older images are removed. For more information, see the Images section of the SQL
Server VMs FAQ.

 Tip

For more information about how to understand pricing for SQL Server images, see
Pricing guidance for SQL Server on Azure Virtual Machines.

The following table provides a matrix of pay-as-you-go SQL Server images.

Version Operating System

SQL Server 2022 Windows Server 2022

SQL Server 2019 Windows Server 2022 , Windows Server 2019

SQL Server 2017 Windows Server 2019 , Windows Server 2016

SQL Server 2016 Windows Server 2019 , Windows Server 2016

SQL Server 2014 Windows Server 2012 R2

SQL Server 2012 Windows Server 2012 R2


7 Note

SQL Server 2008 and SQL Server 2008 R2 are out of extended support and no
longer available from the Azure Marketplace.

To see the available SQL Server on Linux virtual machine images, see Overview of SQL
Server on Azure Virtual Machines (Linux).

It's possible to deploy an older image of SQL Server that isn't available in the Azure
portal by using PowerShell. To view all available images by using PowerShell, use the
following command:

PowerShell

Get-AzVMImageOffer -Location $Location -Publisher 'MicrosoftSQLServer'

For more information about deploying SQL Server VMs using PowerShell, view How to
provision SQL Server virtual machines with Azure PowerShell.

) Important

Older images might be outdated. Remember to apply all SQL Server and Windows
updates before using them for production.

Customer experience improvement program


(CEIP)
The Customer Experience Improvement Program (CEIP) is enabled by default. This
periodically sends reports to Microsoft to help improve SQL Server. There's no
management task required with CEIP unless you want to disable it after provisioning.
You can customize or disable the CEIP by connecting to the VM with remote desktop.
Then run the SQL Server Error and Usage Reporting utility. Follow the instructions to
disable reporting. For more information about data collection, see the SQL Server
Privacy Statement.

Related products and services


Since SQL Server on Azure VMs is integrated into the Azure platform, review resources
from related products and services that interact with the SQL Server on Azure VM
ecosystem:

Windows virtual machines: Azure Virtual Machines overview


Storage: Introduction to Microsoft Azure Storage
Networking: Virtual Network overview, IP addresses in Azure, Create a Fully
Qualified Domain Name in the Azure portal
SQL: SQL Server documentation, Azure SQL Database comparison

Next steps
Get started with SQL Server on Azure Virtual Machines:

Create a SQL Server VM in the Azure portal

Get answers to commonly asked questions about SQL Server VMs:

SQL Server on Azure Virtual Machines FAQ

View Reference Architectures for running N-tier applications on SQL Server in IaaS

Windows N-tier application on Azure with SQL Server


Run an N-tier application in multiple Azure regions for high availability
Automate management with the
Windows SQL Server IaaS Agent
extension
Article • 03/26/2023

Applies to:
SQL Server on Azure VM

The SQL Server IaaS Agent extension (SqlIaasExtension) runs on SQL Server on Azure
Windows Virtual Machines (VMs) to automate management and administration tasks.

This article provides an overview of the extension. To install the SQL Server IaaS Agent
extension to SQL Server on Azure VMs, see the articles for Automatic registration,
Register single VMs, or Register VMs in bulk.

7 Note

Management modes have been removed! Learn more

To learn more about the Azure VM deployment and management experience, including
recent improvements, see:

Azure SQL VM: Automate Management with the SQL Server IaaS Agent extension
(Ep. 2)
Azure SQL VM: New and Improved SQL on Azure VM deployment and
management experience (Ep.8) | Data Exposed.

Overview
The SQL Server IaaS Agent extension allows for integration with the Azure portal, and
unlocks a number of benefits for SQL Server on Azure VMs:

Feature benefits: The extension unlocks a number of automation feature benefits,


such as portal management, license flexibility, automated backup, automated
patching and more. See Feature benefits later in this article for details.

Compliance: The extension offers a simplified method to fulfill the requirement of


notifying Microsoft that the Azure Hybrid Benefit has been enabled as is specified
in the product terms. This process negates needing to manage licensing
registration forms for each resource.
Free: The extension is completely free. There's no additional cost associated with
the extension.

Integration with centrally managed Azure Hybrid Benefit: SQL Server VMs
registered with the extension can integrate with Centrally managed Azure Hybrid
Benefit, making it easy manage the Azure Hybrid Benefit for your SQL Server VMs
at scale.

Simplified license management: The extension simplifies SQL Server license


management, and allows you to quickly identify SQL Server VMs with the Azure
Hybrid Benefit enabled using:

Azure portal

You can use the SQL virtual machines resource in the Azure portal to quickly
identify SQL Server VMs that are using the Azure Hybrid Benefit.

Enable auto upgrade to ensure you're getting the latest updates to the extension each
month.

Management modes
Prior to March 2023, the SQL IaaS Agent extension relied on management modes to
define the security model, and unlock feature benefits. In March 2023, the extension
architecture was updated to remove management modes entirely, instead relying on the
principle of least privilege to give customers control over how they want to use the
extension on a feature-by-feature basis.

Starting in March 2023, when you first register with the extension, binaries are saved to
your virtual machine to provide you with basic functionality such as license
management. Once you enable any feature that relies on the agent, the binaries are
used to install the SQL IaaS Agent to your virtual machine, and permissions are assigned
to the SQL IaaS Agent service as needed by each feature that you enable.

Feature benefits
The SQL Server IaaS Agent extension unlocks a number of feature benefits for managing
your SQL Server VM, letting you pick and choose which benefit suits your business
needs. When you first register with the extension, the functionality is limited to a few
features that don't rely on the SQL IaaS Agent. Once you enable a feature that requires
it, the agent is installed to the SQL Server VM.
The following table details the benefits available through the SQL IaaS Agent extension,
and whether or not the agent is required:

Feature Description

Portal Unlocks management in the portal, so that you can view all of your SQL Server
management VMs in one place, and enable or disable SQL specific features directly from the
portal.

Included with basic registration.

Automated Automates the scheduling of backups for all databases for either the default
backup instance or a properly installed named instance of SQL Server on the VM. For
more information, see Automated backup for SQL Server in Azure virtual
machines (Resource Manager).

Requires SQL IaaS Agent extension.

Automated Configures a maintenance window during which important Windows and SQL
patching Server security updates to your VM can take place, so you can avoid updates
during peak times for your workload. For more information, see Automated
patching for SQL Server in Azure virtual machines (Resource Manager).

Requires SQL IaaS Agent extension.

Azure Key Enables you to automatically install and configure Azure Key Vault on your SQL
Vault Server VM. For more information, see Configure Azure Key Vault integration for
integration SQL Server on Azure Virtual Machines (Resource Manager).

Requires SQL IaaS Agent extension.

Flexible Save on cost by seamlessly transitioning from the bring-your-own-license (also


licensing known as the Azure Hybrid Benefit) to the pay-as-you-go licensing model and
back again.

Included with basic registration.

Flexible If you decide to change the version or edition of SQL Server, you can update the
version / metadata within the Azure portal without having to redeploy the entire SQL
edition Server VM.

Included with basic registration.

Configure You can configure your tempdb directly from the Azure portal, such as specifying
tempdb the number of files, their initial size, their location, and the autogrowth ratio.
Restart your SQL Server service for the changes to take effect.

Requires SQL IaaS Agent extension.


Feature Description

Defender for If you've enabled Microsoft Defender for SQL, then you can view Defender for
Cloud portal Cloud recommendations directly in the SQL virtual machines resource of the
integration Azure portal. See Security best practices to learn more.

Requires SQL IaaS Agent extension.

SQL best Enables you to assess the health of your SQL Server VMs using configuration best
practices practices. For more information, see SQL best practices assessment.

assessment
Requires SQL IaaS Agent extension.

View disk Allows you to view a graphical representation of the disk utilization of your SQL
utilization in data files in the Azure portal.

portal
Requires SQL IaaS Agent extension.

Permissions models
There are two permission models for the SQL Server IaaS Agent extension - either full
sysadmin rights, or the principle of least privilege. The least privileged permission model
grants the minimum permissions required for each feature that you enable. Each feature
that you use is assigned a custom role in SQL Server, and the custom role is only
granted permissions that are required to perform actions related to the feature.

The principle of least privilege model is enabled by default for SQL Server VMs deployed
via Azure Marketplace after October 2022. Existing SQL Server VMs deployed prior to
this date, or VMs with self-installed SQL Server instances, use the sysadmin model by
default and can enable the least privileged permissions model in the Azure portal.

To enable the least privilege permissions model, go to your SQL virtual machines
resource, choose Additional features under Settings and then check the box next to
SQL IaaS Agent extension least privilege mode:
The following table defines the permissions and custom roles used by each feature of
the extension:

Feature Permissions Custom role (Server / DB)

SQL best Server permission - CONTROL SERVER SqlIaaSExtension_Assessment


practices
assessment

Automated Server permission - CONTROL SERVER SqlIaaSExtension_AutoBackup


backups Database permission - db_ddladmin on
master, db_backupoperator on msdb

Azure Backup sysadmin


Service

Credential Server permission - CONTROL SERVER SqlIaaSExtension_CredentialMgmt


management

Availability group sysadmin


portal
management

R Service Server permission - ALTER SETTINGS SqlIaaSExtension_RService

SQL sysadmin
authentication

SQL Server Server permission - ALTER ANY LOGIN, SqlIaaSExtension_SqlInstanceSetting


instance settings ALTER SETTINGS

Storage Server permission - ALTER ANY SqlIaaSExtension_StorageConfig


configuration DATABASE

Status reporting Server permission - VIEW ANY SqlIaaSExtension_StatusReporting


DEFINITION, VIEW SERVER STATE,
ALTER ANY LOGIN, CONNECT SQL
Installation
When you register your SQL Server VM with the SQL IaaS Agent extension, binaries are
copied to the VM. Once you enable a feature that relies on it, the SQL IaaS Agent
extension is installed to the VM and has access to SQL Server. By default, the agent
follows the model of least privilege, and only has permissions within SQL Server that are
associated with the features that you enable - unless you manually installed SQL Server
to the VM yourself, or deployed a SQL Server image from the marketplace prior to
October 2022, in which case the agent has sysadmin rights within SQL Server.

Deploying a SQL Server VM Azure Marketplace image through the Azure portal
automatically registers the SQL Server VM with the extension. However, if you choose to
self-install SQL Server on an Azure virtual machine, or provision an Azure virtual
machine from a custom VHD, then you must register your SQL Server VM with the SQL
IaaS Agent extension to unlock feature benefits. By default, self-installed Azure VMs with
SQL Server 2016 or later are automatically registered with the SQL IaaS Agent extension
when detected by the CEIP service. SQL Server VMs not detected by the CEIP should be
manually registered.

When you register with the SQL IaaS Agent extension, binaries are copied to the virtual
machine, but the agent is not installed by default. The agent will only be installed when
you enable one of the features that require it, and the following two services will then
run on the virtual machine:

Microsoft SQL Server IaaS agent is the main service for the SQL IaaS Agent
extension and should run under the Local System account.
Microsoft SQL Server IaaS Query Service is a helper service that helps the
extension run queries within SQL Server and should run under the NT Service
account NT Service\SqlIaaSExtensionQuery .

There are three ways to register with the extension:

Automatically for all current and future VMs in a subscription


Manually for a single VM
Manually for multiple VMs in bulk

Registering your SQL Server VM with the SQL Server IaaS Agent extension creates the
SQL virtual machine resource within your subscription, which is a separate resource from
the virtual machine resource. Unregistering your SQL Server VM from the extension
removes the SQL virtual machine resource from your subscription but won't drop the
underlying virtual machine.
Multiple instance support
The SQL IaaS Agent extension only works on virtual machines with multiple instances if
there is a default instance. When you register your virtual machine with the SQL IaaS
Agent extension, it registers the default instance, and that's the instance you'll be able
to manage from the Azure portal.

The SQL IaaS Agent extension does not support virtual machines with multiple named
instances if there is no default instance.

Named instance support


The SQL IaaS Agent extension works with a named instance of SQL Server if it's the only
SQL Server instance available on the virtual machine. The SQL IaaS Agent extension does
not support VMs with multiple named instances.

To use a named instance of SQL Server, deploy an Azure virtual machine, install a single
named SQL Server instance to it, and then register it with the SQL IaaS Agent extension.

Alternatively, to use a named instance with an Azure Marketplace SQL Server image,
follow these steps:

1. Deploy a SQL Server VM from Azure Marketplace.


2. Unregister the SQL Server VM from the SQL IaaS Agent extension.
3. Uninstall SQL Server completely within the SQL Server VM.
4. Restart the virtual machine.
5. Install SQL Server with a named instance within the SQL Server VM.
6. Restart the virtual machine.
7. Register the VM with the SQL IaaS Agent Extension.

Failover Clustered Instance support


Registering your SQL Server Failover Clustered Instance (FCI) is supported with limited
functionality. Due to the limited functionality, SQL Server FCIs registered with the
extension do not support features that require the agent, such as automated backup,
patching, and advanced portal management.

If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister the
SQL Server VM from the extension and register it again after your FCI is installed.

Verify status of extension


Use the Azure portal, Azure PowerShell or the Azure CLI to check the status of the
extension.

Azure portal

Verify the extension is installed in the Azure portal.

Go to your Virtual machine resource in the Azure portal (not the SQL virtual
machines resource, but the resource for your VM). Select Extensions under Settings.
You should see the SqlIaasExtension extension listed, as in the following example:

Limitations
The SQL IaaS Agent extension only supports:

SQL Server VMs deployed through the Azure Resource Manager. SQL Server VMs
deployed through the classic model aren't supported.
SQL Server VMs deployed to the public or Azure Government cloud. Deployments
to other private or government clouds aren't supported.
SQL Server FCIs with limited functionality. SQL Server FCIs registered with the
extension do not support features that require the agent, such as automated
backup, patching, and advanced portal management.
VMs with a single named instance, or VMs with multiple named instances, if a
default instance exists.
SQL Server instance images only. The SQL IaaS Agent extension does not support
Reporting Services or Analysis services, such as the following images: SQL Server
Reporting Services, Power BI Report Server, SQL Server Analysis Services.

Privacy statements
When using SQL Server on Azure VMs and the SQL IaaS Agent extension, consider the
following privacy statements:

Automatic registration: By default, Azure VMs with SQL Server 2016 or later are
automatically registered with the SQL IaaS Agent extension when detected by the
CEIP service. Review the SQL Server privacy supplement for more information.

Data collection: The SQL IaaS Agent extension collects data for the express
purpose of giving customers optional benefits when using SQL Server on Azure
Virtual Machines. Microsoft will not use this data for licensing audits without the
customer's advance consent. See the SQL Server privacy supplement for more
information.

In-region data residency: SQL Server on Azure VMs and the SQL IaaS Agent
extension don't move or store customer data out of the region in which the VMs
are deployed.

Next steps
To install the SQL Server IaaS extension to SQL Server on Azure VMs, see the articles for
Automatic installation, Single VMs, or VMs in bulk. For problem resolution, read
Troubleshoot known issues with the extension.

To learn more, review the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Azure VMs
What's new for SQL Server on Azure VMs
Quickstart: Create SQL Server on a
Windows virtual machine in the Azure
portal
Article • 03/03/2023

Applies to:
SQL Server on Azure VM

This quickstart steps through creating a SQL Server virtual machine (VM) in the Azure
portal. Follow the article to deploy either a conventional SQL Server on Azure VM, or
SQL Server deployed to an Azure confidential VM.

 Tip

This quickstart provides a path for quickly provisioning and connecting to a


SQL VM. For more information about other SQL VM provisioning choices, see
the Provisioning guide for SQL Server on Windows VM in the Azure portal.
If you have questions about SQL Server virtual machines, see the Frequently
Asked Questions.

Get an Azure subscription


If you don't have an Azure subscription, create a free account before you begin.

Select a SQL Server VM image


1. Sign in to the Azure portal using your account.

2. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in
the list, select All services, then type Azure SQL in the search box.

3. Select +Add to open the Select SQL deployment option page. You can view
additional information by selecting Show details on the SQL virtual machines tile.

4. For conventional SQL Server VMs, select one of the versions labeled Free SQL
Server License... from the drop-down. For confidential VMs, choose the SQL Server
2019 Enterprise on Windows Server 2022 Database Engine Only image from the
drop-down.
5. Select Create.

Provide basic details


The instructions for basic details vary between deploying a conventional SQL Server on
Azure VM, and SQL Server on an Azure confidential VM.

Conventional VM

To deploy a conventional SQL Server on Azure VM, on the Basics tab, provide the
following information:
1. In the Project Details section, select your Azure subscription and then select
Create new to create a new resource group. Type SQLVM-RG for the name.

2. Under Instance details:


a. Type SQLVM for the Virtual machine name.
b. Choose a location for your Region.
c. For the purpose of this quickstart, leave Availability options set to No
infrastructure redundancy required. To find out more information about
availability options, see Availability.
d. In the Image list, select the image with the version of SQL Server and
operating system you want. For example, you can use an image with a label
that begins with Free SQL Server License:.
e. Choose to Change size for the Size of the virtual machine and select the A2
Basic offering. Be sure to clean up your resources once you're done with
them to prevent any unexpected charges.

3. Under Administrator account, provide a username, such as azureuser and a


password. The password must be at least 12 characters long and meet the
defined complexity requirements.
4. Under Inbound port rules, choose Allow selected ports and then select RDP
(3389) from the drop-down.

SQL Server settings


On the SQL Server settings tab, configure the following options:

1. Under Security & Networking, select Public (Internet) for SQL Connectivity and
change the port to 1401 to avoid using a well-known port number in the public
scenario.

2. Under SQL Authentication, select Enable. The SQL login credentials are set to the
same user name and password that you configured for the VM. Use the default
setting for Azure Key Vault integration. Storage configuration is not available for
the basic SQL Server VM image, but you can find more information about available
options for other images at storage configuration.
3. Change any other settings if needed, and then select Review + create.
Create the SQL Server VM
On the Review + create tab, review the summary, and select Create to create SQL
Server, resource group, and resources specified for this VM.

You can monitor the deployment from the Azure portal. The Notifications button at the
top of the screen shows basic status of the deployment. Deployment can take several
minutes.

Connect to SQL Server


1. In the portal, find the Public IP address of your SQL Server VM in the Overview
section of your virtual machine's properties.

2. On a different computer connected to the Internet, open SQL Server Management


Studio (SSMS).

3. In the Connect to Server or Connect to Database Engine dialog box, edit the
Server name value. Enter your VM's public IP address. Then add a comma and add
the custom port (1401) that you specified when you configured the new VM. For
example, 11.22.33.444,1401 .

4. In the Authentication box, select SQL Server Authentication.

5. In the Login box, type the name of a valid SQL login.

6. In the Password box, type the password of the login.

7. Select Connect.
Log in to the VM remotely
Use the following steps to connect to the SQL Server virtual machine with Remote
Desktop:

1. After the Azure virtual machine is created and running, select Virtual machine, and
then choose your new VM.

2. Select Connect and then choose RDP from the drop-down to download your RDP
file.

3. Open the RDP file that your browser downloads for the VM.
4. The Remote Desktop Connection notifies you that the publisher of this remote
connection cannot be identified. Click Connect to continue.

5. In the Windows Security dialog, click Use a different account. You might have to
click More choices to see this. Specify the user name and password that you
configured when you created the VM. You must add a backslash before the user
name.

6. Click OK to connect.

After you connect to the SQL Server virtual machine, you can launch SQL Server
Management Studio and connect with Windows Authentication using your local
administrator credentials. If you enabled SQL Server Authentication, you can also
connect with SQL Authentication using the SQL login and password you configured
during provisioning.

Access to the machine enables you to directly change machine and SQL Server settings
based on your requirements. For example, you could configure the firewall settings or
change SQL Server configuration settings.

Clean up resources
If you do not need your SQL VM to run continually, you can avoid unnecessary charges
by stopping it when not in use. You can also permanently delete all resources associated
with the virtual machine by deleting its associated resource group in the portal. This
permanently deletes the virtual machine as well, so use this command with care. For
more information, see Manage Azure resources through portal.

Next steps
In this quickstart, you created a SQL Server virtual machine in the Azure portal. To learn
more about how to migrate your data to the new SQL Server, see the following article.

Migration guide: SQL Server to SQL Server on Azure Virtual Machines


Quickstart: Create SQL Server on a
Windows virtual machine with Azure
PowerShell
Article • 03/15/2023

Applies to:
SQL Server on Azure VM

This quickstart steps through creating a SQL Server virtual machine (VM) with Azure
PowerShell.

 Tip

This quickstart provides a path for quickly provisioning and connecting to a


SQL VM. For more information about other Azure PowerShell options for
creating SQL VMs, see the Provisioning guide for SQL Server VMs with Azure
PowerShell.
If you have questions about SQL Server virtual machines, see the Frequently
Asked Questions.

Get an Azure subscription


If you don't have an Azure subscription, create a free account before you begin.

Get Azure PowerShell

7 Note

This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.

Configure PowerShell
1. Open PowerShell and establish access to your Azure account by running the
Connect-AzAccount command.

PowerShell

Connect-AzAccount

2. When you see the sign-in window, enter your credentials. Use the same email and
password that you use to sign in to the Azure portal.

Create a resource group


1. Define a variable with a unique resource group name. To simplify the rest of the
quickstart, the remaining commands use this name as a basis for other resource
names.

PowerShell

$ResourceGroupName = "sqlvm1"

2. Define a location of a target Azure region for all VM resources.

PowerShell

$Location = "East US"

3. Create the resource group.

PowerShell

New-AzResourceGroup -Name $ResourceGroupName -Location $Location

Configure network settings


1. Create a virtual network, subnet, and a public IP address. These resources are used
to provide network connectivity to the virtual machine and connect it to the
internet.

PowerShell

$SubnetName = $ResourceGroupName + "subnet"

$VnetName = $ResourceGroupName + "vnet"

$PipName = $ResourceGroupName + $(Get-Random)

# Create a subnet configuration

$SubnetConfig = New-AzVirtualNetworkSubnetConfig -Name $SubnetName -


AddressPrefix 192.168.1.0/24

# Create a virtual network

$Vnet = New-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -


Location $Location `

-Name $VnetName -AddressPrefix 192.168.0.0/16 -Subnet $SubnetConfig

# Create a public IP address and specify a DNS name

$Pip = New-AzPublicIpAddress -ResourceGroupName $ResourceGroupName -


Location $Location `

-AllocationMethod Static -IdleTimeoutInMinutes 4 -Name $PipName

2. Create a network security group. Configure rules to allow remote desktop (RDP)
and SQL Server connections.

PowerShell

# Rule to allow remote desktop (RDP)

$NsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name "RDPRule" -Protocol


Tcp `

-Direction Inbound -Priority 1000 -SourceAddressPrefix * -


SourcePortRange * `

-DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow

#Rule to allow SQL Server connections on port 1433

$NsgRuleSQL = New-AzNetworkSecurityRuleConfig -Name "MSSQLRule" -


Protocol Tcp `

-Direction Inbound -Priority 1001 -SourceAddressPrefix * -


SourcePortRange * `

-DestinationAddressPrefix * -DestinationPortRange 1433 -Access Allow

# Create the network security group

$NsgName = $ResourceGroupName + "nsg"

$Nsg = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroupName


`

-Location $Location -Name $NsgName `

-SecurityRules $NsgRuleRDP,$NsgRuleSQL

3. Create the network interface.

PowerShell

$InterfaceName = $ResourceGroupName + "int"

$Interface = New-AzNetworkInterface -Name $InterfaceName `

-ResourceGroupName $ResourceGroupName -Location $Location `

-SubnetId $VNet.Subnets[0].Id -PublicIpAddressId $Pip.Id `

-NetworkSecurityGroupId $Nsg.Id

Create the SQL VM


1. Define your credentials to sign in to the VM. The username is "azureadmin". Make
sure you change <password> before running the command.

PowerShell

# Define a credential object

$SecurePassword = ConvertTo-SecureString '<password>' `

-AsPlainText -Force

$Cred = New-Object System.Management.Automation.PSCredential


("azureadmin", $securePassword)

2. Create a virtual machine configuration object and then create the VM. The
following command creates a SQL Server 2017 Developer Edition VM on Windows
Server 2016.

PowerShell

# Create a virtual machine configuration

$VMName = $ResourceGroupName + "VM"

$VMConfig = New-AzVMConfig -VMName $VMName -VMSize Standard_DS13_V2 |

Set-AzVMOperatingSystem -Windows -ComputerName $VMName -Credential


$Cred -ProvisionVMAgent -EnableAutoUpdate |

Set-AzVMSourceImage -PublisherName "MicrosoftSQLServer" -Offer


"SQL2017-WS2016" -Skus "SQLDEV" -Version "latest" |

Add-AzVMNetworkInterface -Id $Interface.Id

# Create the VM

New-AzVM -ResourceGroupName $ResourceGroupName -Location $Location -VM


$VMConfig

 Tip

It takes several minutes to create the VM.

Register with SQL VM RP


To get portal integration and SQL VM features, you must register with the SQL IaaS
Agent extension.

Remote desktop into the VM


1. Use the following command to retrieve the public IP address for the new VM.

PowerShell

Get-AzPublicIpAddress -ResourceGroupName $ResourceGroupName | Select


IpAddress

2. Pass the returned IP address as a command-line parameter to mstsc to start a


Remote Desktop session into the new VM.

mstsc /v:<publicIpAddress>

3. When prompted for credentials, choose to enter credentials for a different account.
Enter the username with a preceding backslash (for example, \azureadmin ), and
the password that you set previously in this quickstart.

Connect to SQL Server


1. After signing in to the Remote Desktop session, launch SQL Server Management
Studio 2017 from the start menu.

2. In the Connect to Server dialog box, keep the defaults. The server name is the
name of the VM. Authentication is set to Windows Authentication. Select
Connect.

You're now connected to SQL Server locally. If you want to connect remotely, you must
configure connectivity from the Azure portal or manually.

Clean up resources
If you don't need the VM to run continuously, you can avoid unnecessary charges by
stopping it when not in use. The following command stops the VM but leaves it
available for future use.

PowerShell

Stop-AzVM -Name $VMName -ResourceGroupName $ResourceGroupName

You can also permanently delete all resources associated with the virtual machine with
the Remove-AzResourceGroup command. Doing so permanently deletes the virtual
machine as well, so use this command with care.

Next steps
In this quickstart, you created a SQL Server 2017 virtual machine using Azure PowerShell.
To learn more about how to migrate your data to the new SQL Server, see the following
article.

Migration guide: SQL Server to SQL Server on Azure Virtual Machines


Quickstart: Create SQL Server VM using
Bicep
Article • 03/30/2023

This quickstart shows you how to use Bicep to create an SQL Server on Azure Virtual
Machine (VM).

Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure
resources. It provides concise syntax, reliable type safety, and support for code reuse.
Bicep offers the best authoring experience for your infrastructure-as-code solutions in
Azure.

Prerequisites
The SQL Server VM Bicep file requires the following:

The latest version of the Azure CLI and/or PowerShell.


A pre-configured resource group with a prepared virtual network and subnet.
An Azure subscription. If you don't have one, create a free account before you
begin.

Review the Bicep file


The Bicep file used in this quickstart is from Azure Quickstart Templates .

Bicep

@description('The name of the VM')

param virtualMachineName string = 'myVM'

@description('The virtual machine size.')

param virtualMachineSize string = 'Standard_D8s_v3'

@description('Specify the name of an existing VNet in the same resource


group')

param existingVirtualNetworkName string

@description('Specify the resrouce group of the existing VNet')

param existingVnetResourceGroup string = resourceGroup().name

@description('Specify the name of the Subnet Name')

param existingSubnetName string

@description('Windows Server and SQL Offer')

@allowed([

'sql2019-ws2019'

'sql2017-ws2019'

'sql2019-ws2022'

'SQL2016SP1-WS2016'

'SQL2016SP2-WS2016'

'SQL2014SP3-WS2012R2'

'SQL2014SP2-WS2012R2'

])

param imageOffer string = 'sql2019-ws2022'

@description('SQL Server Sku')

@allowed([

'standard-gen2'

'enterprise-gen2'

'SQLDEV-gen2'

'web-gen2'

'enterprisedbengineonly-gen2'

])

param sqlSku string = 'standard-gen2'

@description('The admin user name of the VM')

param adminUsername string

@description('The admin password of the VM')

@secure()

param adminPassword string

@description('SQL Server Workload Type')

@allowed([

'General'

'OLTP'

'DW'

])

param storageWorkloadType string = 'General'

@description('Amount of data disks (1TB each) for SQL Data files')

@minValue(1)

@maxValue(8)

param sqlDataDisksCount int = 1

@description('Path for SQL Data files. Please choose drive letter from F to
Z, and other drives from A to E are reserved for system')

param dataPath string = 'F:\\SQLData'

@description('Amount of data disks (1TB each) for SQL Log files')

@minValue(1)

@maxValue(8)

param sqlLogDisksCount int = 1

@description('Path for SQL Log files. Please choose drive letter from F to Z
and different than the one used for SQL data. Drive letter from A to E are
reserved for system')

param logPath string = 'G:\\SQLLog'

@description('Location for all resources.')

param location string = resourceGroup().location

@description('Security Type of the Virtual Machine.')

@allowed([

'Standard'

'TrustedLaunch'

])

param securityType string = 'TrustedLaunch'

var securityProfileJson = {

uefiSettings: {

secureBootEnabled: true

vTpmEnabled: true

securityType: securityType

var networkInterfaceName = '${virtualMachineName}-nic'

var networkSecurityGroupName = '${virtualMachineName}-nsg'

var networkSecurityGroupRules = [

name: 'RDP'

properties: {

priority: 300

protocol: 'Tcp'

access: 'Allow'

direction: 'Inbound'

sourceAddressPrefix: '*'

sourcePortRange: '*'

destinationAddressPrefix: '*'

destinationPortRange: '3389'

var publicIpAddressName = '${virtualMachineName}-


publicip-${uniqueString(virtualMachineName)}'

var publicIpAddressType = 'Dynamic'

var publicIpAddressSku = 'Basic'

var diskConfigurationType = 'NEW'

var nsgId = networkSecurityGroup.id

var subnetRef = resourceId(existingVnetResourceGroup,


'Microsoft.Network/virtualNetWorks/subnets', existingVirtualNetworkName,
existingSubnetName)

var dataDisksLuns = range(0, sqlDataDisksCount)

var logDisksLuns = range(sqlDataDisksCount, sqlLogDisksCount)

var dataDisks = {

createOption: 'Empty'

caching: 'ReadOnly'

writeAcceleratorEnabled: false

storageAccountType: 'Premium_LRS'

diskSizeGB: 1023

var tempDbPath = 'D:\\SQLTemp'

var extensionName = 'GuestAttestation'

var extensionPublisher = 'Microsoft.Azure.Security.WindowsAttestation'

var extensionVersion = '1.0'

var maaTenantName = 'GuestAttestation'

resource publicIpAddress 'Microsoft.Network/publicIPAddresses@2022-01-01' =


{

name: publicIpAddressName

location: location

sku: {

name: publicIpAddressSku

properties: {

publicIPAllocationMethod: publicIpAddressType

resource networkSecurityGroup 'Microsoft.Network/networkSecurityGroups@2022-


01-01' = {

name: networkSecurityGroupName

location: location

properties: {

securityRules: networkSecurityGroupRules

resource networkInterface 'Microsoft.Network/networkInterfaces@2022-01-01' =


{

name: networkInterfaceName

location: location

properties: {

ipConfigurations: [

name: 'ipconfig1'

properties: {

subnet: {

id: subnetRef

privateIPAllocationMethod: 'Dynamic'

publicIPAddress: {

id: publicIpAddress.id

enableAcceleratedNetworking: true

networkSecurityGroup: {

id: nsgId

resource virtualMachine 'Microsoft.Compute/virtualMachines@2022-03-01' = {

name: virtualMachineName

location: location

properties: {

hardwareProfile: {

vmSize: virtualMachineSize

storageProfile: {

dataDisks: [for j in range(0, length(range(0, (sqlDataDisksCount +


sqlLogDisksCount)))): {
lun: range(0, (sqlDataDisksCount + sqlLogDisksCount))[j]

createOption: dataDisks.createOption

caching: ((range(0, (sqlDataDisksCount + sqlLogDisksCount))[j] >=


sqlDataDisksCount) ? 'None' : dataDisks.caching)

writeAcceleratorEnabled: dataDisks.writeAcceleratorEnabled

diskSizeGB: dataDisks.diskSizeGB

managedDisk: {

storageAccountType: dataDisks.storageAccountType

}]

osDisk: {

createOption: 'FromImage'

managedDisk: {

storageAccountType: 'Premium_LRS'

imageReference: {
publisher: 'MicrosoftSQLServer'

offer: imageOffer

sku: sqlSku

version: 'latest'

networkProfile: {

networkInterfaces: [

id: networkInterface.id

osProfile: {

computerName: virtualMachineName

adminUsername: adminUsername

adminPassword: adminPassword

windowsConfiguration: {

enableAutomaticUpdates: true

provisionVMAgent: true

securityProfile: ((securityType == 'TrustedLaunch') ?


securityProfileJson : null)

resource virtualMachineName_extension
'Microsoft.Compute/virtualMachines/extensions@2022-03-01' = if
((securityType == 'TrustedLaunch') &&
((securityProfileJson.uefiSettings.secureBootEnabled == true) &&
(securityProfileJson.uefiSettings.vTpmEnabled == true))) {

parent: virtualMachine

name: extensionName

location: location

properties: {

publisher: extensionPublisher

type: extensionName

typeHandlerVersion: extensionVersion

autoUpgradeMinorVersion: true

enableAutomaticUpgrade: true

settings: {

AttestationConfig: {

MaaSettings: {

maaEndpoint: ''

maaTenantName: maaTenantName

AscSettings: {

ascReportingEndpoint: ''

ascReportingFrequency: ''

useCustomToken: 'false'

disableAlerts: 'false'

resource Microsoft_SqlVirtualMachine_sqlVirtualMachines_virtualMachine
'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2022-07-01-preview' = {

name: virtualMachineName

location: location

properties: {

virtualMachineResourceId: virtualMachine.id

sqlManagement: 'Full'

sqlServerLicenseType: 'PAYG'

storageConfigurationSettings: {

diskConfigurationType: diskConfigurationType

storageWorkloadType: storageWorkloadType

sqlDataSettings: {

luns: dataDisksLuns

defaultFilePath: dataPath
}

sqlLogSettings: {
luns: logDisksLuns

defaultFilePath: logPath

sqlTempDbSettings: {

defaultFilePath: tempDbPath

output adminUsername string = adminUsername

Five Azure resources are defined in the Bicep file:

Microsoft.Network/publicIpAddresses: Creates a public IP address.


Microsoft.Network/networkSecurityGroups: Creates a network security group.
Microsoft.Network/networkInterfaces: Configures the network interface.
Microsoft.Compute/virtualMachines: Creates a virtual machine in Azure.
Microsoft.SqlVirtualMachine/SqlVirtualMachines: registers the virtual machine with
the SQL IaaS Agent extension.

Deploy the Bicep file


1. Save the Bicep file as main.bicep to your local computer.

2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.

CLI

Azure CLI

az deployment group create --resource-group exampleRG --template-


file main.bicep --parameters existingSubnetName=<subnet-name>
adminUsername=<admin-user> adminPassword=<admin-pass>

Make sure to replace the resource group name, exampleRG, with the name of your pre-
configured resource group.

You're required to enter the following parameters:

existingSubnetName: Replace <subnet-name> with the name of the subnet.


adminUsername: Replace <admin-user> with the admin username of the VM.

You'll also be prompted to enter adminPassword.

7 Note

When the deployment finishes, you should see a message indicating the
deployment succeeded.

Review deployed resources


Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in
the resource group.

CLI

Azure CLI

az resource list --resource-group exampleRG

Clean up resources
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete
the resource group and its resources.

CLI

Azure CLI

az group delete --name exampleRG

Next steps
For a step-by-step tutorial that guides you through the process of creating a Bicep file
with Visual Studio Code, see:

Quickstart: Create Bicep files with Visual Studio Code

For other ways to deploy a SQL Server VM, see:

Azure portal
PowerShell

To learn more, see an overview of SQL Server on Azure VMs.


Quickstart: Create SQL Server VM using
an ARM template
Article • 03/30/2023

Use this Azure Resource Manager template (ARM template) to deploy a SQL Server on
Azure Virtual Machine (VM).

An ARM template is a JavaScript Object Notation (JSON) file that defines the
infrastructure and configuration for your project. The template uses declarative syntax.
In declarative syntax, you describe your intended deployment without writing the
sequence of programming commands to create the deployment.

If your environment meets the prerequisites and you're familiar with using ARM
templates, select the Deploy to Azure button. The template will open in the Azure
portal.

Prerequisites
The SQL Server VM ARM template requires the following:

The latest version of the Azure CLI and/or PowerShell.


A preconfigured resource group with a prepared virtual network and subnet.
An Azure subscription. If you don't have one, create a free account before you
begin.

Review the template


The template used in this quickstart is from Azure Quickstart Templates .

JSON

"$schema": "https://schema.management.azure.com/schemas/2019-04-
01/deploymentTemplate.json#",

"contentVersion": "1.0.0.0",

"metadata": {

"_generator": {

"name": "bicep",

"version": "0.17.1.54307",

"templateHash": "3407567292495018002"

},

"parameters": {

"virtualMachineName": {

"type": "string",
"defaultValue": "myVM",

"metadata": {

"description": "The name of the VM"

},

"virtualMachineSize": {

"type": "string",
"defaultValue": "Standard_D8s_v3",

"metadata": {

"description": "The virtual machine size."

},

"existingVirtualNetworkName": {

"type": "string",
"metadata": {

"description": "Specify the name of an existing VNet in the same


resource group"

},

"existingVnetResourceGroup": {

"type": "string",
"defaultValue": "[resourceGroup().name]",

"metadata": {

"description": "Specify the resrouce group of the existing VNet"

},

"existingSubnetName": {

"type": "string",
"metadata": {

"description": "Specify the name of the Subnet Name"

},

"imageOffer": {

"type": "string",
"defaultValue": "sql2019-ws2022",

"allowedValues": [

"sql2019-ws2019",

"sql2017-ws2019",

"sql2019-ws2022",

"SQL2016SP1-WS2016",

"SQL2016SP2-WS2016",

"SQL2014SP3-WS2012R2",

"SQL2014SP2-WS2012R2"

],

"metadata": {

"description": "Windows Server and SQL Offer"

},

"sqlSku": {

"type": "string",
"defaultValue": "standard-gen2",

"allowedValues": [

"standard-gen2",

"enterprise-gen2",

"SQLDEV-gen2",

"web-gen2",

"enterprisedbengineonly-gen2"

],

"metadata": {

"description": "SQL Server Sku"

},

"adminUsername": {

"type": "string",
"metadata": {

"description": "The admin user name of the VM"

},

"adminPassword": {

"type": "securestring",

"metadata": {

"description": "The admin password of the VM"

},

"storageWorkloadType": {

"type": "string",
"defaultValue": "General",

"allowedValues": [

"General",

"OLTP",

"DW"

],

"metadata": {

"description": "SQL Server Workload Type"

},

"sqlDataDisksCount": {

"type": "int",

"defaultValue": 1,

"maxValue": 8,

"minValue": 1,

"metadata": {

"description": "Amount of data disks (1TB each) for SQL Data files"

},

"dataPath": {

"type": "string",
"defaultValue": "F:\\SQLData",

"metadata": {

"description": "Path for SQL Data files. Please choose drive letter
from F to Z, and other drives from A to E are reserved for system"

},

"sqlLogDisksCount": {

"type": "int",

"defaultValue": 1,

"maxValue": 8,

"minValue": 1,

"metadata": {

"description": "Amount of data disks (1TB each) for SQL Log files"

},

"logPath": {

"type": "string",
"defaultValue": "G:\\SQLLog",

"metadata": {

"description": "Path for SQL Log files. Please choose drive letter
from F to Z and different than the one used for SQL data. Drive letter from
A to E are reserved for system"

},

"location": {

"type": "string",
"defaultValue": "[resourceGroup().location]",

"metadata": {

"description": "Location for all resources."

},

"secureBoot": {

"type": "bool",

"defaultValue": true,

"metadata": {

"description": "Secure Boot setting of the virtual machine."

},

"vTPM": {

"type": "bool",

"defaultValue": true,

"metadata": {

"description": "vTPM setting of the virtual machine."

},

"variables": {

"networkInterfaceName": "[format('{0}-nic',
parameters('virtualMachineName'))]",

"networkSecurityGroupName": "[format('{0}-nsg',
parameters('virtualMachineName'))]",

"networkSecurityGroupRules": [

"name": "RDP",

"properties": {

"priority": 300,

"protocol": "Tcp",

"access": "Allow",

"direction": "Inbound",

"sourceAddressPrefix": "*",

"sourcePortRange": "*",

"destinationAddressPrefix": "*",

"destinationPortRange": "3389"

],

"publicIpAddressName": "[format('{0}-publicip-{1}',
parameters('virtualMachineName'),
uniqueString(parameters('virtualMachineName')))]",

"publicIpAddressType": "Dynamic",

"publicIpAddressSku": "Basic",

"diskConfigurationType": "NEW",

"nsgId": "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]",

"subnetRef": "[resourceId(parameters('existingVnetResourceGroup'),
'Microsoft.Network/virtualNetWorks/subnets',
parameters('existingVirtualNetworkName'),
parameters('existingSubnetName'))]",

"dataDisksLuns": "[range(0, parameters('sqlDataDisksCount'))]",

"logDisksLuns": "[range(parameters('sqlDataDisksCount'),
parameters('sqlLogDisksCount'))]",

"dataDisks": {

"createOption": "Empty",

"caching": "ReadOnly",

"writeAcceleratorEnabled": false,

"storageAccountType": "Premium_LRS",

"diskSizeGB": 1023

},

"tempDbPath": "D:\\SQLTemp",

"extensionName": "GuestAttestation",

"extensionPublisher": "Microsoft.Azure.Security.WindowsAttestation",

"extensionVersion": "1.0",

"maaTenantName": "GuestAttestation"

},

"resources": [

"type": "Microsoft.Network/publicIPAddresses",

"apiVersion": "2022-01-01",

"name": "[variables('publicIpAddressName')]",

"location": "[parameters('location')]",

"sku": {

"name": "[variables('publicIpAddressSku')]"

},

"properties": {

"publicIPAllocationMethod": "[variables('publicIpAddressType')]"

},

"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2022-01-01",

"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",

"properties": {

"securityRules": "[variables('networkSecurityGroupRules')]"

},

"type": "Microsoft.Network/networkInterfaces",

"apiVersion": "2022-01-01",

"name": "[variables('networkInterfaceName')]",

"location": "[parameters('location')]",

"properties": {

"ipConfigurations": [

"name": "ipconfig1",

"properties": {

"subnet": {

"id": "[variables('subnetRef')]"

},

"privateIPAllocationMethod": "Dynamic",

"publicIPAddress": {

"id": "[resourceId('Microsoft.Network/publicIPAddresses',
variables('publicIpAddressName'))]"

],

"enableAcceleratedNetworking": true,

"networkSecurityGroup": {

"id": "[variables('nsgId')]"

},

"dependsOn": [

"[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]",

"[resourceId('Microsoft.Network/publicIPAddresses',
variables('publicIpAddressName'))]"

},

"type": "Microsoft.Compute/virtualMachines",

"apiVersion": "2022-03-01",

"name": "[parameters('virtualMachineName')]",

"location": "[parameters('location')]",

"properties": {

"hardwareProfile": {

"vmSize": "[parameters('virtualMachineSize')]"

},

"storageProfile": {

"copy": [

"name": "dataDisks",

"count": "[length(range(0, length(range(0,


add(parameters('sqlDataDisksCount'), parameters('sqlLogDisksCount'))))))]",

"input": {

"lun": "[range(0, add(parameters('sqlDataDisksCount'),


parameters('sqlLogDisksCount')))[range(0, length(range(0,
add(parameters('sqlDataDisksCount'), parameters('sqlLogDisksCount')))))
[copyIndex('dataDisks')]]]",

"createOption": "[variables('dataDisks').createOption]",

"caching": "[if(greaterOrEquals(range(0,
add(parameters('sqlDataDisksCount'), parameters('sqlLogDisksCount')))
[range(0, length(range(0, add(parameters('sqlDataDisksCount'),
parameters('sqlLogDisksCount')))))[copyIndex('dataDisks')]],
parameters('sqlDataDisksCount')), 'None', variables('dataDisks').caching)]",

"writeAcceleratorEnabled": "
[variables('dataDisks').writeAcceleratorEnabled]",

"diskSizeGB": "[variables('dataDisks').diskSizeGB]",

"managedDisk": {

"storageAccountType": "
[variables('dataDisks').storageAccountType]"

],

"osDisk": {

"createOption": "FromImage",

"managedDisk": {

"storageAccountType": "Premium_LRS"

},

"imageReference": {

"publisher": "MicrosoftSQLServer",

"offer": "[parameters('imageOffer')]",

"sku": "[parameters('sqlSku')]",

"version": "latest"

},

"networkProfile": {

"networkInterfaces": [

"id": "[resourceId('Microsoft.Network/networkInterfaces',
variables('networkInterfaceName'))]"

},

"osProfile": {

"computerName": "[parameters('virtualMachineName')]",

"adminUsername": "[parameters('adminUsername')]",

"adminPassword": "[parameters('adminPassword')]",

"windowsConfiguration": {

"enableAutomaticUpdates": true,

"provisionVMAgent": true

},

"securityProfile": {

"uefiSettings": {

"secureBootEnabled": "[parameters('secureBoot')]",

"vTpmEnabled": "[parameters('vTPM')]"

},

"securityType": "TrustedLaunch"

},

"dependsOn": [

"[resourceId('Microsoft.Network/networkInterfaces',
variables('networkInterfaceName'))]"

},

"condition": "[and(parameters('vTPM'), parameters('secureBoot'))]",

"type": "Microsoft.Compute/virtualMachines/extensions",

"apiVersion": "2022-03-01",

"name": "[format('{0}/{1}', parameters('virtualMachineName'),


variables('extensionName'))]",

"location": "[parameters('location')]",

"properties": {

"publisher": "[variables('extensionPublisher')]",

"type": "[variables('extensionName')]",

"typeHandlerVersion": "[variables('extensionVersion')]",

"autoUpgradeMinorVersion": true,

"enableAutomaticUpgrade": true,

"settings": {

"AttestationConfig": {

"MaaSettings": {

"maaEndpoint": "",

"maaTenantName": "[variables('maaTenantName')]"

},

"AscSettings": {

"ascReportingEndpoint": "",

"ascReportingFrequency": ""

},

"useCustomToken": "false",

"disableAlerts": "false"

},

"dependsOn": [

"[resourceId('Microsoft.Compute/virtualMachines',
parameters('virtualMachineName'))]"

},

"type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",

"apiVersion": "2022-07-01-preview",

"name": "[parameters('virtualMachineName')]",

"location": "[parameters('location')]",

"properties": {

"virtualMachineResourceId": "
[resourceId('Microsoft.Compute/virtualMachines',
parameters('virtualMachineName'))]",

"sqlManagement": "Full",

"sqlServerLicenseType": "PAYG",

"storageConfigurationSettings": {

"diskConfigurationType": "[variables('diskConfigurationType')]",

"storageWorkloadType": "[parameters('storageWorkloadType')]",

"sqlDataSettings": {

"luns": "[variables('dataDisksLuns')]",

"defaultFilePath": "[parameters('dataPath')]"

},

"sqlLogSettings": {

"luns": "[variables('logDisksLuns')]",

"defaultFilePath": "[parameters('logPath')]"

},

"sqlTempDbSettings": {

"defaultFilePath": "[variables('tempDbPath')]"

},

"dependsOn": [

"[resourceId('Microsoft.Compute/virtualMachines',
parameters('virtualMachineName'))]"

],

"outputs": {

"adminUsername": {

"type": "string",
"value": "[parameters('adminUsername')]"

Five Azure resources are defined in the template:

Microsoft.Network/publicIpAddresses: Creates a public IP address.


Microsoft.Network/networkSecurityGroups: Creates a network security group.
Microsoft.Network/networkInterfaces: Configures the network interface.
Microsoft.Compute/virtualMachines: Creates a virtual machine in Azure.
Microsoft.SqlVirtualMachine/SqlVirtualMachines: registers the virtual machine with
the SQL IaaS Agent extension.

More SQL Server on Azure VM templates can be found in the quickstart template
gallery .

Deploy the template


1. Select the following image to sign in to Azure and open a template. The template
creates a virtual machine with the intended SQL Server version installed to it, and
registered with the SQL IaaS Agent extension.

2. Select or enter the following values.

Subscription: Select an Azure subscription.


Resource group: The prepared resource group for your SQL Server VM.
Region: Select a region. For example, Central US.
Virtual Machine Name: Enter a name for SQL Server virtual machine.
Virtual Machine Size: Choose the appropriate size for your virtual machine
from the drop-down.
Existing Virtual Network Name: Enter the name of the prepared virtual
network for your SQL Server VM.
Existing Vnet Resource Group: Enter the resource group where your virtual
network was prepared.
Existing Subnet Name: The name of your prepared subnet.
Image Offer: Choose the SQL Server and Windows Server image that best
suits your business needs.
SQL Sku: Choose the edition of SQL Server SKU that best suits your business
needs.
Admin Username: The username for the administrator of the virtual machine.
Admin Password: The password used by the VM administrator account.
Storage Workload Type: The type of storage for the workload that best
matches your business.
Sql Data Disks Count: The number of disks SQL Server uses for data files.
Data Path: The path for the SQL Server data files.
Sql Log Disks Count: The number of disks SQL Server uses for log files.
Log Path: The path for the SQL Server log files.
Location: The location for all of the resources, this value should remain the
default of [resourceGroup().location] .

3. Select Review + create. After the SQL Server VM has been deployed successfully,
you get a notification.

The Azure portal is used to deploy the template. In addition to the Azure portal, you can
also use Azure PowerShell, the Azure CLI, and REST API. To learn other deployment
methods, see Deploy templates.

Review deployed resources


You can use the Azure CLI to check deployed resources.

Azure CLI

echo "Enter the resource group where your SQL Server VM exists:" &&

read resourcegroupName &&

az resource list --resource-group $resourcegroupName

Clean up resources
When no longer needed, delete the resource group by using Azure CLI or Azure
PowerShell:

CLI

Azure CLI

echo "Enter the Resource Group name:" &&

read resourceGroupName &&

az group delete --name $resourceGroupName &&

echo "Press [ENTER] to continue ..."

Next steps
For a step-by-step tutorial that guides you through the process of creating a template,
see:

Tutorial: Create and deploy your first ARM template

For other ways to deploy a SQL Server VM, see:

Azure portal
PowerShell

To learn more, see an overview of SQL Server on Azure VMs.


Business continuity and HADR for SQL
Server on Azure Virtual Machines
Article • 04/03/2023

Applies to:
SQL Server on Azure VM

Business continuity means continuing your business in the event of a disaster, planning
for recovery, and ensuring that your data is highly available. SQL Server on Azure Virtual
Machines can help lower the cost of a high-availability and disaster recovery (HADR)
database solution.

Most SQL Server HADR solutions are supported on virtual machines (VMs), as both
Azure-only and hybrid solutions. In an Azure-only solution, the entire HADR system runs
in Azure. In a hybrid configuration, part of the solution runs in Azure and the other part
runs on-premises in your organization. The flexibility of the Azure environment enables
you to move partially or completely to Azure to satisfy the budget and HADR
requirements of your SQL Server database systems.

This article compares and contrasts the business continuity solutions available for SQL
Server on Azure VMs.

Overview
It's up to you to ensure that your database system has the HADR capabilities that the
service-level agreement (SLA) requires. The fact that Azure provides high-availability
mechanisms, such as service healing for cloud services and failure recovery detection for
virtual machines, does not itself guarantee that you can meet the SLA. Although these
mechanisms help protect the high availability of the virtual machine, they don't protect
the availability of SQL Server running inside the VM.

It's possible for the SQL Server instance to fail while the VM is online and healthy. Even
the high-availability mechanisms provided by Azure allow for downtime of the VMs due
to events like recovery from software or hardware failures and operating system
upgrades.

Geo-redundant storage (GRS) in Azure is implemented with a feature called geo-


replication. GRS might not be an adequate disaster recovery solution for your databases.
Because geo-replication sends data asynchronously, recent updates can be lost in a
disaster. More information about geo-replication limitations is covered in the Geo-
replication support section.
7 Note

It's now possible to lift and shift both your failover cluster instance and availability
group solution to SQL Server on Azure VMs using Azure Migrate.

Deployment architectures
Azure supports these SQL Server technologies for business continuity:

Always On availability groups


Always On failover cluster instances (FCIs)
Log shipping
SQL Server backup and restore with Azure Blob storage
Database mirroring - Deprecated in SQL Server 2016
Azure Site Recovery

You can combine the technologies to implement a SQL Server solution that has both
high-availability and disaster recovery capabilities. Depending on the technology that
you use, a hybrid deployment might require a VPN tunnel with the Azure virtual
network. The following sections show you some example deployment architectures.

Azure only: High-availability solutions


You can have a high-availability solution for SQL Server at a database level with Always
On availability groups. You can also create a high-availability solution at an instance
level with Always On failover cluster instances. For additional protection, you can create
redundancy at both levels by creating availability groups on failover cluster instances.

Technology Example architectures


Technology Example architectures

Availability Availability replicas running in Azure VMs in the same region provide high
groups availability. You need to configure a domain controller VM, because Windows
failover clustering requires an Active Directory domain.

For higher redundancy and availability, the Azure VMs can be deployed in different
availability zones as documented in the availability group overview.

To get started, review theavailability group tutorial.


Technology Example architectures

Failover Failover cluster instances are supported on SQL Server VMs. Because the FCI
cluster feature requires shared storage, five solutions will work with SQL Server on Azure
instances VMs:

- Using Azure shared disks for Windows Server 2019. Shared managed disks are an
Azure product that allows attaching a managed disk to multiple virtual machines
simultaneously. VMs in the cluster can read or write to your attached disk based on
the reservation chosen by the clustered application through SCSI Persistent
Reservations (SCSI PR). SCSI PR is an industry-standard storage solution that's used
by applications running on a storage area network (SAN) on-premises. Enabling
SCSI PR on a managed disk allows you to migrate these applications to Azure as is.

- Using Storage Spaces Direct (S2D) to provide a software-based virtual SAN for
Windows Server 2016 and later.

- Using a Premium file share for Windows Server 2012 and later. Premium file
shares are SSD backed, have consistently low latency, and are fully supported for
use with FCI.

- Using storage supported by a partner solution for clustering. For a specific


example that uses SIOS DataKeeper, see the blog entry Failover clustering and SIOS
DataKeeper .

- Using shared block storage for a remote iSCSI target via Azure ExpressRoute. For
example, NetApp Private Storage (NPS) exposes an iSCSI target via ExpressRoute
with Equinix to Azure VMs.

For shared storage and data replication solutions from Microsoft partners, contact
the vendor for any issues related to accessing data on failover.

To get started, prepare your VM for FCI

Azure only: Disaster recovery solutions


You can have a disaster recovery solution for your SQL Server databases in Azure by
using availability groups, database mirroring, or backup and restore with storage blobs.

Technology Example architectures


Technology Example architectures

Availability Availability replicas running across multiple datacenters in Azure VMs for disaster
groups recovery. This cross-region solution helps protect against a complete site outage.

Within a region, all replicas should be within the same cloud service and the same
virtual network. Because each region will have a separate virtual network, these
solutions require network-to-network connectivity. For more information, see
Configure a network-to-network connection by using the Azure portal. For detailed
instructions, see Configure a SQL Server Always On availability group across
different Azure regions.

Database Principal and mirror and servers running in different datacenters for disaster
mirroring recovery. You must deploy them by using server certificates. SQL Server database
mirroring is not supported for SQL Server 2008 or SQL Server 2008 R2 on an Azure
VM.

Technology Example architectures

Backup and Production databases backed up directly to Blob storage in a different datacenter
restore with for disaster recovery.

Azure Blob
storage

For more information, see Backup and restore for SQL Server on Azure VMs.

Replicate Production SQL Server instance in one Azure datacenter replicated directly to
and fail Azure Storage in a different Azure datacenter for disaster recovery.

over SQL
Server to
Azure with
Azure Site
Recovery

For more information, see Protect SQL Server using SQL Server disaster recovery
and Azure Site Recovery.

Hybrid IT: Disaster recovery solutions


You can have a disaster recovery solution for your SQL Server databases in a hybrid IT
environment by using availability groups, database mirroring, log shipping, and backup
and restore with Azure Blob storage.

Technology Example Architectures


Technology Example Architectures

Availability Some availability replicas running in Azure VMs and other replicas running
groups on-premises for cross-site disaster recovery. The production site can be either
on-premises or in an Azure datacenter.

Because all availability replicas must be in the same failover cluster, the
cluster must span both networks (a multi-subnet failover cluster). This
configuration requires a VPN connection between Azure and the on-premises
network.

For successful disaster recovery of your databases, you should also install a
replica domain controller at the disaster recovery site. To get started, review
theavailability group tutorial.
Technology Example Architectures

Database One partner running in an Azure VM and the other running on-premises for
mirroring cross-site disaster recovery by using server certificates. Partners don't need to
be in the same Active Directory domain, and no VPN connection is required.

Another database mirroring scenario involves one partner running in an


Azure VM and the other running on-premises in the same Active Directory
domain for cross-site disaster recovery. A VPN connection between the Azure
virtual network and the on-premises network is required.

For successful disaster recovery of your databases, you should also install a
replica domain controller at the disaster recovery site. SQL Server database
mirroring is not supported for SQL Server 2008 or SQL Server 2008 R2 on an
Azure VM.

Log shipping One server running in an Azure VM and the other running on-premises for
cross-site disaster recovery. Log shipping depends on Windows file sharing,
so a VPN connection between the Azure virtual network and the on-premises
network is required.

For successful disaster recovery of your databases, you should also install a
replica domain controller at the disaster recovery site.
Technology Example Architectures

Backup and On-premises production databases backed up directly to Azure Blob storage
restore with for disaster recovery.

Azure Blob
storage

For more information, see Backup and restore for SQL Server on Azure Virtual
Machines.

Replicate and fail On-premises production SQL Server instance replicated directly to Azure
over SQL Server Storage for disaster recovery.

to Azure with
Azure Site
Recovery

For more information, see Protect SQL Server using SQL Server disaster
recovery and Azure Site Recovery.

Free DR replica in Azure


If you have Software Assurance , you can implement hybrid disaster recovery (DR)
plans with SQL Server without incurring additional licensing costs for the passive
disaster recovery instance.

For example, you can have two free passive secondaries when all three replicas are
hosted in Azure:
Or you can configure a hybrid failover environment, with a licensed primary on-
premises, one free passive for HA, one free passive for DR on-premises, and one free
passive for DR in Azure:
For more information, see the product licensing terms .

To enable this benefit, go to your SQL Server virtual machine resource. Select Configure
under Settings, and then choose the HA/DR option under SQL Server License. Select
the check box to confirm that this SQL Server VM will be used as a passive replica, and
then select Apply to save your settings. Note that when all three replicas are hosted in
Azure, pay-as-you-go customers are also entitled to use the HA/DR license type.

Important considerations for SQL Server HADR


in Azure
Azure VMs, storage, and networking have different operational characteristics than an
on-premises, non-virtualized IT infrastructure. A successful implementation of an HADR
SQL Server solution in Azure requires that you understand these differences and design
your solution to accommodate them.

High-availability nodes in an availability set


Availability sets in Azure enable you to place the high-availability nodes into separate
fault domains and update domains. The Azure platform assigns an update domain and a
fault domain to each virtual machine in your availability set. This configuration within a
datacenter ensures that during either a planned or unplanned maintenance event, at
least one virtual machine is available and meets the Azure SLA of 99.95 percent.

To configure a high-availability setup, place all participating SQL Server virtual machines
in the same availability set to avoid application or data loss during a maintenance event.
Only nodes in the same cloud service can participate in the same availability set. For
more information, see Manage the availability of virtual machines.

High-availability nodes in an availability zone


Availability zones are unique physical locations within an Azure region. Each zone
consists of one or more datacenters equipped with independent power, cooling, and
networking. The physical separation of availability zones within a region helps protect
applications and data from datacenter failures by ensuring that at least one virtual
machine is available and meets the Azure SLA of 99.99 percent.

To configure high availability, place participating SQL Server virtual machines spread
across availability zones in the region. There will be additional charges for network-to-
network transfers between availability zones. For more information, see Availability
zones.

Network latency in hybrid IT


Deploy your HADR solution with the assumption that there might be periods of high
network latency between your on-premises network and Azure. When you're deploying
replicas to Azure, use asynchronous commit instead of synchronous commit for the
synchronization mode. When you're deploying database mirroring servers both on-
premises and in Azure, use the high-performance mode instead of the high-safety
mode.
See the HADR configuration best practices for cluster and HADR settings that can help
accommodate the cloud environment.

Geo-replication support
Geo-replication in Azure disks does not support the data file and log file of the same
database to be stored on separate disks. GRS replicates changes on each disk
independently and asynchronously. This mechanism guarantees the write order within a
single disk on the geo-replicated copy, but not across geo-replicated copies of multiple
disks. If you configure a database to store its data file and its log file on separate disks,
the recovered disks after a disaster might contain a more up-to-date copy of the data
file than the log file, which breaks the write-ahead log in SQL Server and the ACID
properties (atomicity, consistency, isolation, and durability) of transactions.

If you don't have the option to disable geo-replication on the storage account, keep all
data and log files for a database on the same disk. If you must use more than one disk
due to the size of the database, deploy one of the disaster recovery solutions listed
earlier to ensure data redundancy.

Next steps
Decide if an availability group or a failover cluster instance is the best business
continuity solution for your business. Then review the best practices for configuring your
environment for high availability and disaster recovery.
Backup and restore for SQL Server on
Azure VMs
Article • 06/27/2023

Applies to:
SQL Server on Azure VM

This article provides guidance on the backup and restore options available for SQL
Server running on a Windows virtual machine (VM) in Azure. Azure Storage maintains
three copies of every Azure VM disk to guarantee protection against data loss or
physical data corruption. Thus, unlike SQL Server on-premises, you don't need to focus
on hardware failures. However, you should still back up your SQL Server databases to
protect against application or user errors, such as inadvertent data insertions or
deletions. In this situation, it is important to be able to restore to a specific point in time.

The first part of this article provides an overview of the available backup and restore
options. This is followed by sections that provide more information on each strategy.

Backup and restore options


The following table provides information on various backup and restore options for SQL
Server on Azure VMs:

Strategy SQL Description


versions

Automated 2014 Automated Backup allows you to schedule regular backups for all
Backup and later databases on a SQL Server VM. Backups are stored in Azure storage for
up to 30 days. Beginning with SQL Server 2016, Automated Backup
offers additional options such as configuring manual scheduling and the
frequency of full and log backups.

Azure 2008 Azure Backup provides an Enterprise class backup capability for SQL
Backup for and later Server on Azure VMs. With this service, you can centrally manage
SQL VMs backups for multiple servers and thousands of databases. Databases can
be restored to a specific point in time in the portal. It offers a
customizable retention policy that can maintain backups for years.

Manual All Depending on your version of SQL Server, there are various techniques
backup to manually backup and restore SQL Server on Azure VM. In this
scenario, you are responsible for how your databases are backed up and
the storage location and management of these backups.
The following sections describe each option in more detail. The final section of this
article provides a summary in the form of a feature matrix.

Automated Backup
Automated Backup provides an automatic backup service for SQL Server Standard and
Enterprise editions running on a Windows VM in Azure. This service is provided by the
SQL Server IaaS Agent Extension, which is automatically installed on SQL Server
Windows virtual machine images in the Azure portal.

All databases are backed up to an Azure storage account that you configure. Backups
can be encrypted and retained for up to 90 days.

SQL Server 2016 and higher VMs offer more customization options with Automated
Backup. These improvements include:

System database backups


Manual backup schedule and time window
Full and log file backup frequency

To restore a database, you must locate the required backup file(s) in the storage account
and perform a restore on your SQL VM using SQL Server Management Studio (SSMS) or
Transact-SQL commands.

For more information on how to configure Automated Backup for SQL VMs, see one of
the following articles:

SQL Server 2016 and later: Automated Backup for Azure Virtual Machines
SQL Server 2014: Automated Backup for SQL Server 2014 Virtual Machines

Azure Backup for SQL VMs


Azure Backup provides an Enterprise class backup capability for SQL Server on Azure
VMs. All backups are stored and managed in a Recovery Services vault. There are several
advantages that this solution provides, especially for Enterprises:

Zero-infrastructure backup: You do not have to manage backup servers or storage


locations.
Scale: Protect many SQL VMs and thousands of databases.
Pay-As-You-Go: This capability is a separate service provided by Azure Backup, but
as with all Azure services, you only pay for what you use.
Central management and monitoring: Centrally manage all of your backups,
including other workloads that Azure Backup supports, from a single dashboard in
Azure.
Policy driven backup and retention: Create standard backup policies for regular
backups. Establish retention policies to maintain backups for years.
Support for SQL Always On: Detect and protect a SQL Server Always On
configuration and honor the backup Availability Group backup preference.
15-minute Recovery Point Objective (RPO): Configure SQL transaction log
backups up to every 15 minutes.
Point in time restore: Use the portal to recover databases to a specific point in
time without having to manually restore multiple full, differential, and log backups.
Consolidated email alerts for failures: Configure consolidated email notifications
for any failures.
Azure role-based access control: Determine who can manage backup and restore
operations through the portal.

This Azure Backup solution for SQL VMs is generally available. For more information, see
Back up SQL Server database to Azure.

Manual backup
If you want to manually manage backup and restore operations on your SQL VMs, there
are several options depending on the version of SQL Server you are using. For an
overview of backup and restore, see one of the following articles based on your version
of SQL Server:

Backup and restore for SQL Server 2016 and later


Backup and restore for SQL Server 2014
Backup and restore for SQL Server 2012
Backup and restore for SQL Server SQL Server 2008 R2
Backup and restore for SQL Server 2008

The following sections describe several manual backup and restore options in more
detail.

Backup to attached disks


For SQL Server on Azure VMs, you can use native backup and restore techniques using
attached disks on the VM for the destination of the backup files. However, there is a
limit to the number of disks you can attach to an Azure virtual machine, based on the
size of the virtual machine. There is also the overhead of disk management to consider.
For an example of how to manually create a full database backup using SQL Server
Management Studio (SSMS) or Transact-SQL, see Create a Full Database Backup.

Backup to URL
Beginning with SQL Server 2012 SP1 CU2, you can back up and restore directly to
Microsoft Azure Blob storage, which is also known as backup to URL. SQL Server 2016
also introduced the following enhancements for this feature:

2016 Details
enhancement

Striping When backing up to Microsoft Azure Blob Storage, SQL Server 2016 supports
backing up to multiple blobs to enable backing up large databases, up to a
maximum of 12.8 TB.

Snapshot Through the use of Azure snapshots, SQL Server File-Snapshot Backup provides
Backup nearly instantaneous backups and rapid restores for database files stored using
Azure Blob Storage. This capability enables you to simplify your backup and
restore policies. File-snapshot backup also supports point in time restore. For
more information, see Snapshot Backups for Database Files in Azure.

For more information, see the one of the following articles based on your version of SQL
Server:

SQL Server 2016 and later: SQL Server Backup to URL


SQL Server 2014: SQL Server 2014 Backup to URL
SQL Server 2012: SQL Server 2012 Backup to URL

Managed Backup
Beginning with SQL Server 2014, Managed Backup automates the creation of backups to
Azure storage. Behind the scenes, Managed Backup makes use of the Backup to URL
feature described in the previous section of this article. Managed Backup is also the
underlying feature that supports the SQL Server VM Automated Backup service.

Beginning in SQL Server 2016, Managed Backup got additional options for scheduling,
system database backup, and full and log backup frequency.

For more information, see one of the following articles based on your version of SQL
Server:

Managed Backup to Microsoft Azure for SQL Server 2016 and later
Managed Backup to Microsoft Azure for SQL Server 2014
Decision matrix
The following table summarizes the capabilities of each backup and restore option for
SQL Server virtual machines in Azure.

Option Automated Azure Backup Manual backup


Backup for SQL

Requires additional Azure service

Configure backup policy in Azure portal

Restore databases in Azure portal

Manage multiple servers in one


dashboard

Point-in-time restore

15-minute Recovery Point Objective


(RPO)

Short-term backup retention policy


(days)

Long-term backup retention policy


(months, years)

Built-in support for SQL Server Always


On

Backup to Azure Storage account(s)


(automatic)
(customer
(automatic) managed)

Management of storage and backup


files

Backup to attached disks on the VM

Central customizable backup reports

Consolidated email alerts for failures

Customize monitoring based on Azure


Monitor logs

Monitor backup jobs with SSMS or


Transact-SQL scripts

Restore databases with SSMS or


Transact-SQL scripts
Next steps
If you are planning your deployment of SQL Server on Azure VM, you can find
provisioning guidance in the following guide: How to provision a Windows SQL Server
virtual machine in the Azure portal.

Although backup and restore can be used to migrate your data, there are potentially
easier data migration paths to SQL Server on VM. For a full discussion of migration
options and recommendations, see Migration guide: SQL Server to SQL Server on Azure
Virtual Machines.
Use Azure Storage for SQL Server
backup and restore
Article • 03/01/2023

Applies to:
SQL Server on Azure VM

Starting with SQL Server 2012 SP1 CU2, you can now write back up SQL Server
databases directly to Azure Blob storage. Use this functionality to back up to and restore
from Azure Blob storage. Back up to the cloud offers benefits of availability, limitless
geo-replicated off-site storage, and ease of migration of data to and from the cloud.
You can issue BACKUP or RESTORE statements by using Transact-SQL or SMO.

Overview
SQL Server 2016 introduces new capabilities; you can use file-snapshot backup to
perform nearly instantaneous backups and incredibly quick restores.

This topic explains why you might choose to use Azure Storage for SQL Server backups
and then describes the components involved. You can use the resources provided at the
end of the article to access walk-throughs and additional information to start using this
service with your SQL Server backups.

Benefits of using Azure Blob storage for SQL


Server backups
There are several challenges that you face when backing up SQL Server. These
challenges include storage management, risk of storage failure, access to off-site
storage, and hardware configuration. Many of these challenges are addressed by using
Azure Blob storage for SQL Server backups. Consider the following benefits:

Ease of use: Storing your backups in Azure blobs can be a convenient, flexible, and
easy to access off-site option. Creating off-site storage for your SQL Server
backups can be as easy as modifying your existing scripts/jobs to use the BACKUP
TO URL syntax. Off-site storage should typically be far enough from the
production database location to prevent a single disaster that might impact both
the off-site and production database locations. By choosing to geo-replicate your
Azure blobs, you have an extra layer of protection in the event of a disaster that
could affect the whole region.
Backup archive: Azure Blob storage offers a better alternative to the often used
tape option to archive backups. Tape storage might require physical transportation
to an off-site facility and measures to protect the media. Storing your backups in
Azure Blob storage provides an instant, highly available, and a durable archiving
option.
Managed hardware: There is no overhead of hardware management with Azure
services. Azure services manage the hardware and provide geo-replication for
redundancy and protection against hardware failures.
Unlimited storage: By enabling a direct backup to Azure blobs, you have access to
virtually unlimited storage. Alternatively, backing up to an Azure virtual machine
disk has limits based on machine size. There is a limit to the number of disks you
can attach to an Azure virtual machine for backups. This limit is 16 disks for an
extra large instance and fewer for smaller instances.
Backup availability: Backups stored in Azure blobs are available from anywhere
and at any time and can easily be accessed for restores to a SQL Server instance,
without the need for database attach/detach or downloading and attaching the
VHD.
Cost: Pay only for the service that is used. Can be cost-effective as an off-site and
backup archive option. See the Azure pricing calculator , and the Azure Pricing
article for more information.
Storage snapshots: When database files are stored in an Azure blob and you are
using SQL Server 2016, you can use file-snapshot backup to perform nearly
instantaneous backups and incredibly quick restores.

For more details, see SQL Server Backup and Restore with Azure Blob storage.

The following two sections introduce Azure Blob storage, including the required SQL
Server components. It is important to understand the components and their interaction
to successfully use backup and restore from Azure Blob storage.

Azure Blob storage components


The following Azure components are used when backing up to Azure Blob storage.

Component Description

Storage The storage account is the starting point for all storage services. To access Azure
account Blob storage, first create an Azure Storage account. SQL Server is agnostic to the
type of storage redundancy used. Backup to Page blobs and block blobs is
supported for every storage redundancy (LRS\ZRS\GRS\RA-GRS\RA-GZRS\etc.).
For more information about Azure Blob storage, see How to use Azure Blob
storage.
Component Description

Container A container provides a grouping of a set of blobs, and can store an unlimited
number of Blobs. To write a SQL Server backup to Azure Blob storage, you must
have at least the root container created.

Blob A file of any type and size. Blobs are addressable using the following URL format:
https://<storageaccount>.blob.core.windows.net/<container>/<blob> . For more
information about page Blobs, see Understanding Block and Page Blobs

SQL Server components


The following SQL Server components are used when backing up to Azure Blob storage.

Component Description

URL A URL specifies a Uniform Resource Identifier (URI) to a unique backup file. The
URL provides the location and name of the SQL Server backup file. The URL must
point to an actual blob, not just a container. If the blob does not exist, Azure
creates it. If an existing blob is specified, the backup command fails, unless the
WITH FORMAT option is specified. The following is an example of the URL you would
specify in the BACKUP command:
https://<storageaccount>.blob.core.windows.net/<container>/<FILENAME.bak> .

HTTPS is recommended but not required.

Credential The information that is required to connect and authenticate to Azure Blob storage
is stored as a credential. In order for SQL Server to write backups to an Azure Blob
or restore from it, a SQL Server credential must be created. For more information,
see SQL Server Credential.

7 Note

SQL Server 2016 has been updated to support block blobs. Please see Tutorial: Use
Microsoft Azure Blob Storage with SQL Server databases for more details.

Next steps
1. Create an Azure account if you don't already have one. If you are evaluating Azure,
consider the free trial .

2. Then go through one of the following tutorials that walk you through creating a
storage account and performing a restore.
SQL Server 2014: Tutorial: SQL Server 2014 Backup and Restore to Microsoft
Azure Blob storage.
SQL Server 2016: Tutorial: Using the Microsoft Azure Blob Storage with SQL
Server databases

3. Review additional documentation starting with SQL Server Backup and Restore
with Microsoft Azure Blob storage.

If you have any problems, review the topic SQL Server Backup to URL Best Practices and
Troubleshooting.

For other SQL Server backup and restore options, see Backup and Restore for SQL
Server on Azure Virtual Machines.
Always On availability group on SQL
Server on Azure VMs
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

This article introduces Always On availability groups (AG) for SQL Server on Azure Virtual
Machines (VMs).

To get started, see the availability group tutorial.

Overview
Always On availability groups on Azure Virtual Machines are similar to Always On
availability groups on-premises, and rely on the underlying Windows Server Failover
Cluster. However, since the virtual machines are hosted in Azure, there are a few
additional considerations as well, such as VM redundancy, and routing traffic on the
Azure network.

The following diagram illustrates an availability group for SQL Server on Azure VMs:

7 Note

It's now possible to lift and shift your availability group solution to SQL Server on
Azure VMs using Azure Migrate. See Migrate availability group to learn more.
VM redundancy
To increase redundancy and high availability, SQL Server VMs should either be in the
same availability set, or different availability zones.

Placing a set of VMs in the same availability set protects from outages within a data
center caused by equipment failure (VMs within an Availability Set don't share
resources) or from updates (VMs within an availability set aren't updated at the same
time).

Availability Zones protect against the failure of an entire data center, with each Zone
representing a set of data centers within a region. By ensuring resources are placed in
different Availability Zones, no data center-level outage can take all of your VMs offline.

When creating Azure VMs, you must choose between configuring Availability Sets vs
Availability Zones. An Azure VM can't participate in both.

While Availability Zones may provide better availability than Availability Sets (99.99% vs
99.95%), performance should also be a consideration. VMs within an Availability Set can
be placed in a proximity placement group which guarantees they're close to each other,
minimizing network latency between them. VMs located in different Availability Zones
have greater network latency between them, which can increase the time it takes to
synchronize data between the primary and secondary replica(s). This may cause delays
on the primary replica as well as increase the chance of data loss in the event of an
unplanned failover. It's important to test the proposed solution under load and ensure
that it meets SLAs for both performance and availability.

Connectivity
To match the on-premises experience for connecting to your availability group listener,
deploy your SQL Server VMs to multiple subnets within the same virtual network.
Having multiple subnets negates the need for the extra dependency on an Azure Load
Balancer, or a distributed network name (DNN) to route your traffic to your listener.

If you deploy your SQL Server VMs to a single subnet, you can configure a virtual
network name (VNN) and an Azure Load Balancer, or a distributed network name (DNN)
to route traffic to your availability group listener. Review the differences between the
two and then deploy either a distributed network name (DNN) or a virtual network
name (VNN) for your availability group.
Most SQL Server features work transparently with availability groups when using the
DNN, but there are certain features that may require special consideration. See AG and
DNN interoperability to learn more.

Additionally, there are some behavior differences between the functionality of the VNN
listener and DNN listener that are important to note:

Failover time: Failover time is faster when using a DNN listener since there's no
need to wait for the network load balancer to detect the failure event and change
its routing.
Existing connections: Connections made to a specific database within a failing-over
availability group will close, but other connections to the primary replica will
remain open since the DNN stays online during the failover process. This is
different than a traditional VNN environment where all connections to the primary
replica typically close when the availability group fails over, the listener goes
offline, and the primary replica transitions to the secondary role. When using a
DNN listener, you may need to adjust application connection strings to ensure that
connections are redirected to the new primary replica upon failover.
Open transactions: Open transactions against a database in a failing-over
availability group will close and roll back, and you need to manually reconnect. For
example, in SQL Server Management Studio, close the query window and open a
new one.

Setting up a VNN listener in Azure requires a load balancer. There are two main options
for load balancers in Azure: external (public) or internal. The external (public) load
balancer is internet-facing and is associated with a public virtual IP that's accessible over
the internet. An internal load balancer supports only clients within the same virtual
network. For either load balancer type, you must enable Direct Server Return.

You can still connect to each availability replica separately by connecting directly to the
service instance. Also, because availability groups are backward compatible with
database mirroring clients, you can connect to the availability replicas like database
mirroring partners as long as the replicas are configured similarly to database mirroring:

There's one primary replica and one secondary replica.


The secondary replica is configured as nonreadable (Readable Secondary option
set to No).

The following is an example client connection string that corresponds to this database
mirroring-like configuration using ADO.NET or SQL Server Native Client:

Console
Data Source=ReplicaServer1;Failover Partner=ReplicaServer2;Initial
Catalog=AvailabilityDatabase;

For more information on client connectivity, see:

Using Connection String Keywords with SQL Server Native Client


Connect Clients to a Database Mirroring Session (SQL Server)
Connecting to Availability Group Listener in Hybrid IT
Availability Group Listeners, Client Connectivity, and Application Failover (SQL
Server)
Using Database-Mirroring Connection Strings with Availability Groups

Single subnet requires load balancer


When you create an availability group listener on a traditional on-premises Windows
Server Failover Cluster (WSFC), a DNS record gets created for the listener with the IP
address you provide, and this IP address maps to the MAC address of the current
Primary replica in the ARP tables of switches and routers on the on-premises network.
The cluster does this by using Gratuitous ARP (GARP), where it broadcasts the latest IP-
to-MAC address mapping to the network whenever a new Primary is selected after
failover. In this case, the IP address is for the listener, and the MAC is of the current
Primary replica. The GARP forces an update to the ARP table entries for the switches and
routers, and to any users connecting to the listener IP address are routed seamlessly to
the current Primary replica.

For security reasons, broadcasting on any public cloud (Azure, Google, AWS) isn't
allowed, so the uses of ARPs and GARPs on Azure isn't supported. To overcome this
difference in networking environments, SQL Server VMs in a single subnet availability
group rely on load balancers to route traffic to the appropriate IP addresses. Load
balancers are configured with a frontend IP address that corresponds to the listener and
a probe port is assigned so that the Azure Load Balancer periodically polls for the status
of the replicas in the availability group. Since only the primary replica SQL Server VM
responds to the TCP probe, incoming traffic is then routed to the VM that successfully
responds to the probe. Additionally, the corresponding probe port is configured as the
WSFC cluster IP, ensuring the Primary replica responds to the TCP probe.

Availability groups configured in a single subnet must either use a load balancer or
distributed network name (DNN) to route traffic to the appropriate replica. To avoid
these dependencies, configure your availability group in multiple subnets so the
availability group listener is configured with an IP address for a replica in each subnet,
and can route traffic appropriately.
If you've already created your availability group in a single subnet, you can migrate it to
a multi-subnet environment.

Lease mechanism
For SQL Server, the AG resource DLL determines the health of the AG based on the AG
lease mechanism and Always On health detection. The AG resource DLL exposes
resource health through the IsAlive operation. The resource monitor polls IsAlive at the
cluster heartbeat interval, which is set by the CrossSubnetDelay and SameSubnetDelay
cluster-wide values. On a primary node, the cluster service initiates failover whenever the
IsAlive call to the resource DLL returns that the AG isn't healthy.

The AG resource DLL monitors the status of internal SQL Server components.
Sp_server_diagnostics reports the health of these components to SQL Server on an
interval controlled by HealthCheckTimeout.

Unlike other failover mechanisms, the SQL Server instance plays an active role in the
lease mechanism. The lease mechanism is used as a LooksAlive validation between the
Cluster resource host and the SQL Server process. The mechanism is used to ensure that
the two sides (the Cluster Service and SQL Server service) are in frequent contact,
checking each other's state and ultimately preventing a split-brain scenario.

When configuring an AG in Azure VMs, there's often a need to configure these


thresholds differently than they would be configured in an on-premises environment. To
configure threshold settings according to best practices for Azure VMs, see the cluster
best practices.

Network configuration
Deploy your SQL Server VMs to multiple subnets whenever possible to avoid the
dependency on an Azure Load Balancer or a distributed network name (DNN) to route
traffic to your availability group listener.

On an Azure VM failover cluster, we recommend a single NIC per server (cluster node).
Azure networking has physical redundancy, which makes additional NICs unnecessary
on an Azure VM failover cluster. Although the cluster validation report issues a warning
that the nodes are only reachable on a single network, this warning can be safely
ignored on Azure VM failover clusters.

Basic availability group


As basic availability group doesn't allow more than one secondary replica and there's no
read access to the secondary replica, you can use the database mirroring connection
strings for basic availability groups. Using the connection string eliminates the need to
have listeners. Removing the listener dependency is helpful for availability groups on
Azure VMs as it eliminates the need for a load balancer or having to add additional IPs
to the load balancer when you have multiple listeners for additional databases.

For example, to explicitly connect using TCP/IP to the AG database AdventureWorks on


either Replica_A or Replica_B of a Basic AG (or any AG that that has only one secondary
replica and the read access isn't allowed in the secondary replica), a client application
could supply the following database mirroring connection string to successfully connect
to the AG

Server=Replica_A; Failover_Partner=Replica_B; Database=AdventureWorks;


Network=dbmssocn

Deployment options

 Tip

Eliminate the need for an Azure Load Balancer or distributed network name (DNN)
for your Always On availability group by creating your SQL Server VMs in multiple
subnets within the same Azure virtual network.

There are multiple options for deploying an availability group to SQL Server on Azure
VMs, some with more automation than others.

The following table provides a comparison of the options available:

Azure Azure CLI / Quickstart Manual Manual


portal, PowerShell Templates (single (multi-
subnet) subnet)

SQL Server version 2016 + 2016 + 2016 + 2012 + 2012 +

SQL Server edition Enterprise Enterprise Enterprise Enterprise, Enterprise,


Standard Standard

Windows Server version 2016 + 2016 + 2016 + All All

Creates the cluster for you Yes Yes Yes No No

Creates the availability Yes No No No No


group and listener for you
Azure Azure CLI / Quickstart Manual Manual
portal, PowerShell Templates (single (multi-
subnet) subnet)

Creates listener and load N/A No No Yes N/A


balancer independently

Possible to create DNN N/A No No Yes N/A


listener using this method?

WSFC quorum Cloud Cloud Cloud All All


configuration witness witness witness

DR with multiple regions No No No Yes Yes

Multisubnet support Yes No No N/A Yes

Support for an existing AD Yes Yes Yes Yes Yes

DR with multizone in the Yes Yes Yes Yes Yes


same region

Distributed AG with no AD No No No Yes Yes

Distributed AG with no No No No Yes Yes


cluster

Requires load balancer or No Yes Yes Yes No


DNN

Next steps
To get started, review the HADR best practices, and then deploy your availability group
manually with the availability group tutorial.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Always On availability groups overview
Failover cluster instances with SQL
Server on Azure Virtual Machines
Article • 04/18/2023

Applies to:
SQL Server on Azure VM

This article introduces feature differences when you're working with failover cluster
instances (FCI) for SQL Server on Azure Virtual Machines (VMs).

To get started, prepare your vm.

Overview
SQL Server on Azure VMs uses Windows Server Failover Clustering (WSFC) functionality
to provide local high availability through redundancy at the server-instance level: a
failover cluster instance. An FCI is a single instance of SQL Server that's installed across
WSFC (or simply the cluster) nodes and, possibly, across multiple subnets. On the
network, an FCI appears to be a single instance of SQL Server running on a single
computer. But the FCI provides failover from one WSFC node to another if the current
node becomes unavailable.

The rest of the article focuses on the differences for failover cluster instances when
they're used with SQL Server on Azure VMs. To learn more about the failover clustering
technology, see:

Windows cluster technologies


SQL Server failover cluster instances

7 Note

It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.

Quorum
Failover cluster instances with SQL Server on Azure Virtual Machines support using a
disk witness, a cloud witness, or a file share witness for cluster quorum.
To learn more, see Quorum best practices with SQL Server VMs in Azure.

Storage
In traditional on-premises clustered environments, a Windows failover cluster uses a
storage area network (SAN) that's accessible by both nodes as the shared storage. SQL
Server files are hosted on the shared storage, and only the active node can access the
files at one time.

SQL Server on Azure VMs offers various options as a shared storage solution for a
deployment of SQL Server failover cluster instances:

Azure shared disks Premium file Storage


shares Spaces Direct
(S2D)

Minimum OS All Windows Server Windows


version 2012 Server 2016

Minimum SQL All SQL Server 2012 SQL Server


Server version 2016

Supported Premium SSD LRS: Availability Sets with Availability sets Availability
VM or without proximity placement group
and availability sets
availability Premium SSD ZRS: Availability Zones
zones
Ultra disks: Same availability zone

Supports Yes No Yes


FileStream

Azure blob No No Yes


cache

The rest of this section lists the benefits and limitations of each storage option available
for SQL Server on Azure VMs.

Azure shared disks


Azure shared disks are a feature of Azure managed disks. Windows Server Failover
Clustering supports using Azure shared disks with a failover cluster instance.

Supported OS: All

Supported SQL version: All

Benefits:
Useful for applications looking to migrate to Azure while keeping their high-
availability and disaster recovery (HADR) architecture as is.
Can migrate clustered applications to Azure as is because of SCSI Persistent
Reservations (SCSI PR) support.
Supports shared Azure Premium SSD and Azure Ultra Disk storage.
Can use a single shared disk or stripe multiple shared disks to create a shared
storage pool.
Supports Filestream.
Premium SSDs support availability sets.
Premium SSDs Zone Redundant Storage (ZRS) supports Availability Zones. VMs
part of FCI can be placed in different availability zones.

7 Note

While Azure shared disks also support Standard SSD sizes, we do not recommend
using Standard SSDs for SQL Server workloads due to the performance limitations.

Limitations:

Premium SSD disk caching is not supported.


Ultra disks do not support availability sets.
Availability zones are supported for Ultra Disks, but the VMs must be in the same
availability zone, which reduces the availability of the virtual machine to 99.9%
Ultra disks do not support Zone Redundant Storage (ZRS)

To get started, see SQL Server failover cluster instance with Azure shared disks.

Storage Spaces Direct


Storage Spaces Direct is a Windows Server feature that is supported with failover
clustering on Azure Virtual Machines. It provides a software-based virtual SAN.

Supported OS: Windows Server 2016 and later

Supported SQL version: SQL Server 2016 and later

Benefits:

Sufficient network bandwidth enables a robust and highly performant shared


storage solution.
Supports Azure blob cache, so reads can be served locally from the cache.
(Updates are replicated simultaneously to both nodes.)
Supports FileStream.
Limitations:

Available only for Windows Server 2016 and later.


Availability zones are not supported.
Requires the same disk capacity attached to both virtual machines.
High network bandwidth is required to achieve high performance because of
ongoing disk replication.
Requires a larger VM size and double pay for storage, because storage is attached
to each VM.

To get started, see SQL Server failover cluster instance with Storage Spaces Direct.

Premium file share


Premium file shares are a feature of Azure Files. Premium file shares are SSD backed and
have consistently low latency. They're fully supported for use with failover cluster
instances for SQL Server 2012 or later on Windows Server 2012 or later. Premium file
shares give you greater flexibility, because you can resize and scale a file share without
any downtime.

Supported OS: Windows Server 2012 and later

Supported SQL version: SQL Server 2012 and later

Benefits:

Shared storage solution for virtual machines spread over multiple availability
zones.
Fully managed file system with single-digit latencies and burstable I/O
performance.

Limitations:

Available only for Windows Server 2012 and later.


FileStream is not supported.

To get started, see SQL Server failover cluster instance with Premium file share.

Partner
There are partner clustering solutions with supported storage.

Supported OS: All

Supported SQL version: All


One example uses SIOS DataKeeper as the storage. For more information, see the blog
entry Failover clustering and SIOS DataKeeper .

iSCSI and ExpressRoute


You can also expose an iSCSI target shared block storage via Azure ExpressRoute.

Supported OS: All

Supported SQL version: All

For example, NetApp Private Storage (NPS) exposes an iSCSI target via ExpressRoute
with Equinix to Azure VMs.

For shared storage and data replication solutions from Microsoft partners, contact the
vendor for any issues related to accessing data on failover.

Connectivity
To match the on-premises experience for connecting to your failover cluster instance,
deploy your SQL Server VMs to multiple subnets within the same virtual network.
Having multiple subnets negates the need for the extra dependency on an Azure Load
Balancer, or a distributed network name (DNN) to route your traffic to your FCI.

If you deploy your SQL Server VMs to a single subnet, you can configure a virtual
network name (VNN) and an Azure Load Balancer, or a distributed network name (DNN)
to route traffic to your failover cluster instance. Review the differences between the two
and then deploy either a distributed network name or a virtual network name for your
failover cluster instance.

The distributed network name is recommended, if possible, as failover is faster, and the
overhead and cost of managing the load balancer is eliminated.

Most SQL Server features work transparently with FCIs when using the DNN, but there
are certain features that may require special consideration. See FCI and DNN
interoperability to learn more.

Limitations
Consider the following limitations for failover cluster instances with SQL Server on Azure
Virtual Machines.

Limited extension support


At this time, SQL Server failover cluster instances on Azure virtual machines registered
with the SQL IaaS Agent extension only support a limited number of features. See the
table of benefits.

If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister from
the extension by deleting the SQL virtual machine resource for the corresponding VMs
and then register it with the SQL IaaS Agent extension again. When you're deleting the
SQL virtual machine resource by using the Azure portal, clear the check box next to the
correct virtual machine to avoid deleting the virtual machine.

SQL Server FCIs registered with the extension do not support features that require the
agent, such as automated backup, patching, and advanced portal management. See the
table of benefits.

MSDTC
Azure Virtual Machines support Microsoft Distributed Transaction Coordinator (MSDTC)
on Windows Server 2019 with storage on Clustered Shared Volumes (CSV) and Azure
Standard Load Balancer or on SQL Server VMs that are using Azure shared disks.

On Azure Virtual Machines, MSDTC isn't supported for Windows Server 2016 or earlier
with Clustered Shared Volumes because:

The clustered MSDTC resource can't be configured to use shared storage. On


Windows Server 2016, if you create an MSDTC resource, it won't show any shared
storage available for use, even if storage is available. This issue has been fixed in
Windows Server 2019.
The basic load balancer doesn't handle RPC ports.

Next steps
Review cluster configurations best practices, and then you can prepare your SQL Server
VM for FCI.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Failover cluster instance overview
Windows Server Failover Cluster with
SQL Server on Azure VMs
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

This article describes the differences when using the Windows Server Failover Cluster
feature with SQL Server on Azure VMs for high availability and disaster recovery (HADR),
such as for Always On availability groups (AG) or failover cluster instances (FCI).

To learn more about the Windows feature itself, see the Windows Server Failover Cluster
documentation.

Overview
SQL Server high availability solutions on Windows, such as Always On availability groups
(AG) or failover cluster instances (FCI) rely on the underlying Windows Server Failover
Clustering (WSFC) service.

The cluster service monitors network connections and the health of nodes in the cluster.
This monitoring is in addition to the health checks that SQL Server does as part of the
availability group or failover cluster instance feature. If the cluster service is unable to
reach the node, or if the AG or FCI role in the cluster becomes unhealthy, then the
cluster service initiates appropriate recovery actions to recover and bring applications
and services online, either on the same or on another node in the cluster.

Cluster health monitoring


In order to provide high availability, the cluster must ensure the health of the different
components that make up the clustered solution. The cluster service monitors the health
of the cluster based on a number of system and network parameters in order to detect
and respond to failures.

Setting the threshold for declaring a failure is important in order to achieve a balance
between promptly responding to a failure, and avoiding false failures.

There are two strategies for monitoring:

Monitoring Description
Monitoring Description

Aggressive Provides rapid failure detection and recovery of hard failures, which delivers the
highest levels of availability. The cluster service and SQL Server are both less
forgiving of transient failure and in some situations may prematurely fail over
resources when there are transient outages. Once failure is detected, the corrective
action that follows may take extra time.

Relaxed Provides more forgiving failure detection with a greater tolerance for brief transient
network issues. Avoids transient failures, but also introduces the risk of delaying
the detection of a true failure.

Aggressive settings in a cluster environment in the cloud may lead to premature failures
and longer outages, therefore a relaxed monitoring strategy is recommended for
failover clusters on Azure VMs. To adjust threshold settings, see cluster best practices for
more detail.

Cluster heartbeat
The primary settings that affect cluster heart beating and health detection between
nodes:

Setting Description

Delay This defines the frequency at which cluster heartbeats are sent between nodes. The
delay is the number of seconds before the next heartbeat is sent. Within the same
cluster there can be different delay settings configured between nodes on the same
subnet, and between nodes that are on different subnets.

Threshold The threshold is the number of heartbeats that can be missed before the cluster takes
recovery action. Within the same cluster there can be different threshold settings
configured between nodes on the same subnet, and between nodes that are on
different subnets.

The default values for these settings may be too low for cloud environments, and could
result in unnecessary failures due to transient network issues. To be more tolerant, use
relaxed threshold settings for failover clusters in Azure VMs. See cluster best practices
for more detail.

Quorum
Although a two-node cluster will function without a quorum resource, customers are
strictly required to use a quorum resource to have production support. Cluster
validation won't pass any cluster without a quorum resource.
Technically, a three-node cluster can survive a single node loss (down to two nodes)
without a quorum resource. But after the cluster is down to two nodes, there's a risk that
the clustered resources will go offline to prevent a split-brain scenario if a node is lost or
there's a communication failure between the nodes. Configuring a quorum resource will
allow the cluster resources to remain online with only one node online.

The disk witness is the most resilient quorum option, but to use a disk witness on a SQL
Server on Azure VM, you must use an Azure Shared Disk which imposes some
limitations to the high availability solution. As such, use a disk witness when you're
configuring your failover cluster instance with Azure Shared Disks, otherwise use a cloud
witness whenever possible.

The following table lists the quorum options available for SQL Server on Azure VMs:

Cloud witness Disk witness File share witness

Supported Windows Server All All


OS 2016+

Description A cloud witness is a A disk witness is a small A file share witness is an


type of failover cluster clustered disk in the Cluster SMB file share that's
quorum witness that Available Storage group. This typically configured on
uses Microsoft Azure disk is highly available and a file server running
to provide a vote on can fail over between nodes. It Windows Server. It
cluster quorum. The contains a copy of the cluster maintains clustering
default size is about 1 database, with a default size information in a
MB and contains just that's less than 1 GB. The disk witness.log file, but
the time stamp. A witness is the preferred doesn't store a copy of
cloud witness is ideal quorum option for any cluster the cluster database. In
for deployments in that uses Azure Shared Disks Azure, you can
multiple sites, multiple (or any shared-disk solution configure a file share on
zones, and multiple like shared SCSI, iSCSI, or fiber a separate virtual
regions. Use a cloud channel SAN). A Clustered machine within the
witness whenever Shared Volume cannot be same virtual network.
possible, unless you used as a disk witness. Use a file share witness
have a failover cluster Configure an Azure shared if a disk witness or
solution with shared disk as the disk witness. cloud witness is
storage. unavailable in your
environment.

To get started, see Configure cluster quorum.

Virtual network name (VNN)


To match the on-premises experience for connecting to your availability group listener
or failover cluster instance, deploy your SQL Server VMs to multiple subnets within the
same virtual network. Having multiple subnets negates the need for the extra
dependency on an Azure Load Balancer to route traffic to your HADR solution. To learn
more, see Multi-subnet AG, and Multi-subnet FCI.

In a traditional on-premises environment, clustered resources such as failover cluster


instances or Always On availability groups rely on the Virtual Network Name to route
traffic to the appropriate target - either the failover cluster instance, or the listener of
the Always On availability group. The virtual name binds the IP address in DNS, and
clients can use either the virtual name or the IP address to connect to their high
availability target, regardless of which node currently owns the resource. The VNN is a
network name and address managed by the cluster, and the cluster service moves the
network address from node to node during a failover event. During a failure, the address
is taken offline on the original primary replica, and brought online on the new primary
replica.

On Azure Virtual Machines in a single subnet, an additional component is necessary to


route traffic from the client to the Virtual Network Name of the clustered resource
(failover cluster instance, or the listener of an availability group). In Azure, a load
balancer holds the IP address for the VNN that the clustered SQL Server resources rely
on and is necessary to route traffic to the appropriate high availability target. The load
balancer also detects failures with the networking components and moves the address
to a new host.

The load balancer distributes inbound flows that arrive at the front end, and then routes
that traffic to the instances defined by the back-end pool. You configure traffic flow by
using load-balancing rules and health probes. With SQL Server FCI, the back-end pool
instances are the Azure virtual machines running SQL Server, and with availability
groups, the back-end pool is the listener. There is a slight failover delay when you're
using the load balancer, because the health probe conducts alive checks every 10
seconds by default.

To get started, learn how to configure Azure Load Balancer for a failover cluster instance
or an availability group.

Supported OS: All

Supported SQL version: All

Supported HADR solution: Failover cluster instance, and availability group

Configuration of the VNN can be cumbersome, it's an additional source of failure, it can
cause a delay in failure detection, and there is an overhead and cost associated with
managing the additional resource. To address some of these limitations, SQL Server
introduced support for the Distributed Network Name feature.
Distributed network name (DNN)
To match the on-premises experience for connecting to your availability group listener
or failover cluster instance, deploy your SQL Server VMs to multiple subnets within the
same virtual network. Having multiple subnets negates the need for the extra
dependency on a DNN to route traffic to your HADR solution. To learn more, see Multi-
subnet AG, and Multi-subnet FCI.

For SQL Server VMs deployed to a single subnet, the distributed network name feature
provides an alternative way for SQL Server clients to connect to the SQL Server failover
cluster instance or availability group listener without using a load balancer. The DNN
feature is available starting with SQL Server 2016 SP3 , SQL Server 2017 CU25 , SQL
Server 2019 CU8 , on Windows Server 2016 and later.

When a DNN resource is created, the cluster binds the DNS name with the IP addresses
of all the nodes in the cluster. The client will try to connect to each IP address in this list
to find which resource to connect to. You can accelerate this process by specifying
MultiSubnetFailover=True in the connection string. This setting tells the provider to try

all IP addresses in parallel, so the client can connect to the FCI or listener instantly.

A distributed network name is recommended over a load balancer when possible


because:

The end-to-end solution is more robust since you no longer have to maintain the
load balancer resource.
Eliminating the load balancer probes minimizes failover duration.
The DNN simplifies provisioning and management of the failover cluster instance
or availability group listener with SQL Server on Azure VMs.

Most SQL Server features work transparently with FCI and availability groups when using
the DNN, but there are certain features that may require special consideration.

Supported OS: Windows Server 2016 and later

Supported SQL version: SQL Server 2019 CU2 (FCI) and SQL Server 2019 CU8 (AG)

Supported HADR solution: Failover cluster instance, and availability group

To get started, learn to configure a distributed network name resource for a failover
cluster instance or an availability group.

There are additional considerations when using the DNN with other SQL Server features.
See FCI and DNN interoperability and AG and DNN interoperability to learn more.

Recovery actions
The cluster service takes corrective action when a failure is detected. This could restart
the resource on the existing node, or fail the resource over to another node. Once
corrective measures are initiated, they make take some time to complete.

For example, a restarted availability group comes online per the following sequence:

1. Listener IP comes online


2. Listener network name comes online
3. Availability group comes online
4. Individual databases go through recovery, which can take some time depending
on a number of factors, such as the length of the redo log. Connections are routed
by the listener only once the database is fully recovered. To learn more, see
Estimating failover time (RTO).

Since recovery could take some time, aggressive monitoring set to detect a failure in 20
seconds could result in an outage of minutes if a transient event occurs (such as
memory-preserving Azure VM maintenance). Setting the monitoring to a more relaxed
value of 40 seconds can help avoid a longer interruption of service.

To adjust threshold settings, see cluster best practices for more detail.

Node location
Nodes in a Windows cluster on virtual machines in Azure may be physically separated
within the same Azure region, or they can be in different regions. The distance may
introduce network latency, much like having cluster nodes spread between locations in
your own facilities would. In cloud environments, the difference is that within a region
you may not be aware of the distance between nodes. Moreover, some other factors like
physical and virtual components, number of hops, etc. can also contribute to increased
latency. If latency between the nodes is a concern, consider placing the nodes of the
cluster within a proximity placement group to guarantee network proximity.

Resource limits
When you configure an Azure VM, you determine the computing resources limits for the
CPU, memory, and IO. Workloads that require more resources than the purchased Azure
VM, or disk limits may cause VM performance issues. Performance degradation may
result in a failed health check for either the cluster service, or for the SQL Server high
availability feature. Resource bottlenecks may make the node or resource appear down
to the cluster or SQL Server.
Intensive SQL IO operations or maintenance operations such as backups, index, or
statistics maintenance could cause the VM or disk to reach IOPS or MBPS throughput
limits, which could make SQL Server unresponsive to an IsAlive/LooksAlive check.

If your SQL Server is experiencing unexpected failovers, check to make sure you are
following all performance best practices and monitor the server for disk or VM-level
capping.

Azure platform maintenance


Like any other cloud service, Azure periodically updates its platform to improve the
reliability, performance, and security of the host infrastructure for virtual machines. The
purpose of these updates ranges from patching software components in the hosting
environment to upgrading networking components or decommissioning hardware.

Most platform updates don't affect customer VMs. When a no-impact update isn't
possible, Azure chooses the update mechanism that's least impactful to customer VMs.
Most nonzero-impact maintenance pauses the VM for less than 10 seconds. In certain
cases, Azure uses memory-preserving maintenance mechanisms. These mechanisms
pause the VM for up to 30 seconds and preserve the memory in RAM. The VM is then
resumed, and its clock is automatically synchronized.

Memory-preserving maintenance works for more than 90 percent of Azure VMs. It


doesn't work for G, M, N, and H series. Azure increasingly uses live-migration
technologies and improves memory-preserving maintenance mechanisms to reduce the
pause durations. When the VM is live-migrated to a different host, some sensitive
workloads like SQL Server, might show a slight performance degradation in the few
minutes leading up to the VM pause.

A resource bottleneck during platform maintenance may make the AG or FCI appear
down to the cluster service. See the resource limits section of this article to learn more.

If you are using aggressive cluster monitoring, an extended VM pause may trigger a
failover. A failover will often cause more downtime than the maintenance pause, so it is
recommended to use relaxed monitoring to avoid triggering a failover while the VM is
paused for maintenance. See the cluster best practices for more information on setting
cluster thresholds in Azure VMs.

Limitations
Consider the following limitations when you're working with FCI or availability groups
and SQL Server on Azure Virtual Machines.
MSDTC
Azure Virtual Machines support Microsoft Distributed Transaction Coordinator (MSDTC)
on Windows Server 2019 with storage on Clustered Shared Volumes (CSV) and Azure
Standard Load Balancer or on SQL Server VMs that are using Azure shared disks.

On Azure Virtual Machines, MSDTC isn't supported for Windows Server 2016 or earlier
with Clustered Shared Volumes because:

The clustered MSDTC resource can't be configured to use shared storage. On


Windows Server 2016, if you create an MSDTC resource, it won't show any shared
storage available for use, even if storage is available. This issue has been fixed in
Windows Server 2019.
The basic load balancer doesn't handle RPC ports.

Next steps
Now that you've familiarized yourself with the differences when using a Windows
Failover Cluster with SQL Server on Azure VMs, learn about the high availability features
availability groups or failover cluster instances. If you're ready to get started, be sure to
review the best practices for configuration recommendations.
Checklist: Best practices for SQL Server
on Azure VMs
Article • 03/29/2023

Applies to:
SQL Server on Azure VM

This article provides a quick checklist as a series of best practices and guidelines to
optimize performance of your SQL Server on Azure Virtual Machines (VMs).

For comprehensive details, see the other articles in this series: VM size, Storage, Security,
HADR configuration, Collect baseline.

Enable SQL Assessment for SQL Server on Azure VMs and your SQL Server will be
evaluated against known best practices with results on the SQL VM management page
of the Azure portal.

For videos about the latest features to optimize SQL Server VM performance and
automate management, review the following Data Exposed videos:

Caching and Storage Capping (Ep. 1)


Automate Management with the SQL Server IaaS Agent extension (Ep. 2)
Use Azure Monitor Metrics to Track VM Cache Health (Ep. 3)
Get the best price-performance for your SQL Server workloads on Azure VM
Using PerfInsights to Evaluate Resource Health and Troubleshoot (Ep. 5)
Best Price-Performance with Ebdsv5 Series (Ep.6)
Optimally Configure SQL Server on Azure Virtual Machines with SQL Assessment
(Ep. 7)
New and Improved SQL Server on Azure VM deployment and management
experience (Ep.8)

Overview
While running SQL Server on Azure Virtual Machines, continue using the same database
performance tuning options that are applicable to SQL Server in on-premises server
environments. However, the performance of a relational database in a public cloud
depends on many factors, such as the size of a virtual machine, and the configuration of
the data disks.

There's typically a trade-off between optimizing for costs and optimizing for
performance. This performance best practices series is focused on getting the best
performance for SQL Server on Azure Virtual Machines. If your workload is less
demanding, you might not require every recommended optimization. Consider your
performance needs, costs, and workload patterns as you evaluate these
recommendations.

VM size
The checklist in this section covers the VM size best practices for SQL Server on Azure
VMs.

The new Ebdsv5-series provides the highest I/O throughput-to-vCore ratio in


Azure along with a memory-to-vCore ratio of 8. This series offers the best price-
performance for SQL Server workloads on Azure VMs. Consider this series first for
most SQL Server workloads.
Use VM sizes with 4 or more vCPUs like the E4ds_v5 or higher.
Use memory optimized virtual machine sizes for the best performance of SQL
Server workloads.
The Edsv5 series, the M-, and the Mv2- series offer the optimal memory-to-vCore
ratio required for OLTP workloads.
The M series VMs offer the highest memory-to-vCore ratio in Azure. Consider
these VMs for mission critical and data warehouse workloads.
Use Azure Marketplace images to deploy your SQL Server Virtual Machines as the
SQL Server settings and storage options are configured for optimal performance.
Collect the target workload's performance characteristics and use them to
determine the appropriate VM size for your business.
Use the Data Migration Assistant and SKU recommendation tools to find the
right VM size for your existing SQL Server workload.
Use Azure Data Studio to migrate to Azure.

Storage
The checklist in this section covers the storage best practices for SQL Server on Azure
VMs.

Monitor the application and determine storage bandwidth and latency


requirements for SQL Server data, log, and tempdb files before choosing the disk
type.
To optimize storage performance, plan for highest uncached IOPS available and
use data caching as a performance feature for data reads while avoiding virtual
machine and disks capping.
Place data, log, and tempdb files on separate drives.
For the data drive, use premium P30 and P40 or smaller disks to ensure the
availability of cache support
For the log drive plan for capacity and test performance versus cost while
evaluating the premium P30 - P80 disks
If submillisecond storage latency is required, use Azure ultra disks for the
transaction log.
For M-series virtual machine deployments consider write accelerator over
using Azure ultra disks.
Place tempdb on the local ephemeral SSD (default D:\ ) drive for most SQL
Server workloads that aren't part of Failover Cluster Instance (FCI) after choosing
the optimal VM size.
If the capacity of the local drive isn't enough for tempdb , consider sizing up
the VM. See Data file caching policies for more information.
For FCI place tempdb on the shared storage.
If the FCI workload is heavily dependent on tempdb disk performance, then as
an advanced configuration place tempdb on the local ephemeral SSD (default
D:\ ) drive, which isn't part of FCI storage. This configuration needs custom
monitoring and action to ensure the local ephemeral SSD (default D:\ ) drive
is available all the time as any failures of this drive won't trigger action from
FCI.
Stripe multiple Azure data disks using Storage Spaces to increase I/O bandwidth
up to the target virtual machine's IOPS and throughput limits.
Set host caching to read-only for data file disks.
Set host caching to none for log file disks.
Don't enable read/write caching on disks that contain SQL Server data or log
files.
Always stop the SQL Server service before changing the cache settings of your
disk.
For development and test workloads, and long-term backup archival consider
using standard storage. It isn't recommended to use Standard HDD/SSD for
production workloads.
Credit-based Disk Bursting (P1-P20) should only be considered for smaller dev/test
workloads and departmental systems.
To optimize storage performance, plan for highest uncached IOPS available, and
use data caching as a performance feature for data reads while avoiding virtual
machine and disks capping/throttling.
Format your data disk to use 64-KB allocation unit size for all data files placed on a
drive other than the temporary D:\ drive (which has a default of 4 KB). SQL Server
VMs deployed through Azure Marketplace come with data disks formatted with
allocation unit size and interleave for the storage pool set to 64 KB.
Configure the storage account in the same region as the SQL Server VM.
Disable Azure geo-redundant storage (geo-replication) and use LRS (local
redundant storage) on the storage account.
Enable the SQL Best Practices Assessment to identify possible performance issues
and evaluate that your SQL Server VM is configured to follow best practices.
Review and monitor disk and VM limits using Storage IO utilization metrics.
Exclude SQL Server files from antivirus software scanning. This includes data files,
log files, and backup files.

Security
The checklist in this section covers the security best practices for SQL Server on Azure
VMs.

SQL Server features and capabilities provide a method of security at the data level and is
how you achieve defense-in-depth at the infrastructure level for cloud-based and
hybrid solutions. In addition, with Azure security measures, it is possible to encrypt your
sensitive data, protect virtual machines from viruses and malware, secure network traffic,
identify and detect threats, meet compliance requirements, and provides a single
method for administration and reporting for any security need in the hybrid cloud.

Use Microsoft Defender for Cloud to evaluate and take action to improve the
security posture of your data environment. Capabilities such as Azure Advanced
Threat Protection (ATP) can be leveraged across your hybrid workloads to improve
security evaluation and give the ability to react to risks. Registering your SQL
Server VM with the SQL IaaS Agent extension surfaces Microsoft Defender for
Cloud assessments within the SQL virtual machine resource of the Azure portal.
Use Microsoft Defender for SQL to discover and mitigate potential database
vulnerabilities, as well as detect anomalous activities that could indicate a threat to
your SQL Server instance and database layer.
Vulnerability Assessment is a part of Microsoft Defender for SQL that can discover
and help remediate potential risks to your SQL Server environment. It provides
visibility into your security state, and includes actionable steps to resolve security
issues.
Use Azure confidential VMs to reinforce protection of your data in-use, and data-
at-rest against host operator access. Azure confidential VMs allow you to
confidently store your sensitive data in the cloud and meet strict compliance
requirements.
If you're on SQL Server 2022, consider using Azure Active Directory authentication
to connect to your instance of SQL Server.
Azure Advisor analyzes your resource configuration and usage telemetry and then
recommends solutions that can help you improve the cost effectiveness,
performance, high availability, and security of your Azure resources. Leverage
Azure Advisor at the virtual machine, resource group, or subscription level to help
identify and apply best practices to optimize your Azure deployments.
Use Azure Disk Encryption when your compliance and security needs require you
to encrypt the data end-to-end using your encryption keys, including encryption of
the ephemeral (locally attached temporary) disk.
Managed Disks are encrypted at rest by default using Azure Storage Service
Encryption, where the encryption keys are Microsoft-managed keys stored in
Azure.
For a comparison of the managed disk encryption options review the managed
disk encryption comparison chart
Management ports should be closed on your virtual machines - Open remote
management ports expose your VM to a high level of risk from internet-based
attacks. These attacks attempt to brute force credentials to gain admin access to
the machine.
Turn on Just-in-time (JIT) access for Azure virtual machines
Use Azure Bastion over Remote Desktop Protocol (RDP).
Lock down ports and only allow the necessary application traffic using Azure
Firewall which is a managed Firewall as a Service (FaaS) that grants/ denies server
access based on the originating IP address.
Use Network Security Groups (NSGs) to filter network traffic to, and from, Azure
resources on Azure Virtual Networks
Leverage Application Security Groups to group servers together with similar port
filtering requirements, with similar functions, such as web servers and database
servers.
For web and application servers leverage Azure Distributed Denial of Service
(DDoS) protection. DDoS attacks are designed to overwhelm and exhaust network
resources, making apps slow or unresponsive. It is common for DDos attacks to
target user interfaces. Azure DDoS protection sanitizes unwanted network traffic,
before it impacts service availability
Use VM extensions to help address anti-malware, desired state, threat detection,
prevention, and remediation to address threats at the operating system, machine,
and network levels:
Guest Configuration extension performs audit and configuration operations
inside virtual machines.
Network Watcher Agent virtual machine extension for Windows and Linux
monitors network performance, diagnostic, and analytics service that allows
monitoring of Azure networks.
Microsoft Antimalware Extension for Windows to help identify and remove
viruses, spyware, and other malicious software, with configurable alerts.
Evaluate 3rd party extensions such as Symantec Endpoint Protection for
Windows VM (/azure/virtual-machines/extensions/symantec)
Use Azure Policy to create business rules that can be applied to your environment.
Azure Policies evaluate Azure resources by comparing the properties of those
resources against rules defined in JSON format.
Azure Blueprints enables cloud architects and central information technology
groups to define a repeatable set of Azure resources that implements and adheres
to an organization's standards, patterns, and requirements. Azure Blueprints are
different than Azure Policies.

SQL Server features


The following is a quick checklist of best practices for SQL Server configuration settings
when running your SQL Server instances in an Azure virtual machine in production:

Enable database page compression where appropriate.


Enable backup compression.
Enable instant file initialization for data files.
Limit autogrowth of the database.
Disable autoshrink of the database.
Disable autoclose of the database.
Move all databases to data disks, including system databases.
Move SQL Server error log and trace file directories to data disks.
Configure default backup and database file locations.
Set max SQL Server memory limit to leave enough memory for the Operating
System. (Leverage Memory\Available Bytes to monitor the operating system
memory health).
Enable lock pages in memory.
Enable optimize for adhoc workloads for OLTP heavy environments.
Evaluate and apply the latest cumulative updates for the installed versions of SQL
Server.
Enable Query Store on all production SQL Server databases following best
practices.
Enable automatic tuning on mission critical application databases.
Ensure that all tempdb best practices are followed.
Use the recommended number of files, using multiple tempdb data files starting
with one file per core, up to eight files.
Schedule SQL Server Agent jobs to run DBCC CHECKDB, index reorganize, index
rebuild, and update statistics jobs.
Monitor and manage the health and size of the SQL Server transaction log file.
Take advantage of any new SQL Server features available for the version being
used.
Be aware of the differences in supported features between the editions you're
considering deploying.
Exclude SQL Server files from antivirus software scanning. This includes data files,
log files, and backup files.

Azure features
The following is a quick checklist of best practices for Azure-specific guidance when
running your SQL Server on Azure VM:

Register with the SQL IaaS Agent Extension to unlock a number of feature benefits.
Leverage the best backup and restore strategy for your SQL Server workload.
Ensure Accelerated Networking is enabled on the virtual machine.
Leverage Microsoft Defender for Cloud to improve the overall security posture of
your virtual machine deployment.
Leverage Microsoft Defender for Cloud, integrated with Microsoft Defender for
Cloud , for specific SQL Server VM coverage including vulnerability assessments,
and just-in-time access, which reduces the attack service while allowing legitimate
users to access virtual machines when necessary. To learn more, see vulnerability
assessments, enable vulnerability assessments for SQL Server VMs and just-in-time
access.
Leverage Azure Advisor to address performance, cost, reliability, operational
excellence, and security recommendations.
Leverage Azure Monitor to collect, analyze, and act on telemetry data from your
SQL Server environment. This includes identifying infrastructure issues with VM
insights and monitoring data with Log Analytics for deeper diagnostics.
Enable Autoshutdown for development and test environments.
Implement a high availability and disaster recovery (HADR) solution that meets
your business continuity SLAs, see the HADR options options available for SQL
Server on Azure VMs.
Use the Azure portal (support + troubleshooting) to evaluate resource health and
history; submit new support requests when needed.

HADR configuration
The checklist in this section covers the HADR best practices for SQL Server on Azure
VMs.

High availability and disaster recovery (HADR) features, such as the Always On
availability group and the failover cluster instance rely on underlying Windows Server
Failover Cluster technology. Review the best practices for modifying your HADR settings
to better support the cloud environment.

For your Windows cluster, consider these best practices:

Deploy your SQL Server VMs to multiple subnets whenever possible to avoid the
dependency on an Azure Load Balancer or a distributed network name (DNN) to
route traffic to your HADR solution.
Change the cluster to less aggressive parameters to avoid unexpected outages
from transient network failures or Azure platform maintenance. To learn more, see
heartbeat and threshold settings. For Windows Server 2012 and later, use the
following recommended values:
SameSubnetDelay: 1 second
SameSubnetThreshold: 40 heartbeats
CrossSubnetDelay: 1 second
CrossSubnetThreshold: 40 heartbeats
Place your VMs in an availability set or different availability zones. To learn more,
see VM availability settings.
Use a single NIC per cluster node.
Configure cluster quorum voting to use 3 or more odd number of votes. Don't
assign votes to DR regions.
Carefully monitor resource limits to avoid unexpected restarts or failovers due to
resource constraints.
Ensure your OS, drivers, and SQL Server are at the latest builds.
Optimize performance for SQL Server on Azure VMs. Review the other sections
in this article to learn more.
Reduce or spread out workload to avoid resource limits.
Move to a VM or disk that his higher limits to avoid constraints.

For your SQL Server availability group or failover cluster instance, consider these best
practices:

If you're experiencing frequent unexpected failures, follow the performance best


practices outlined in the rest of this article.
If optimizing SQL Server VM performance doesn't resolve your unexpected
failovers, consider relaxing the monitoring for the availability group or failover
cluster instance. However, doing so may not address the underlying source of the
issue and could mask symptoms by reducing the likelihood of failure. You may still
need to investigate and address the underlying root cause. For Windows Server
2012 or higher, use the following recommended values:
Lease timeout: Use this equation to calculate the maximum lease time-out
value:

Lease timeout < (2 * SameSubnetThreshold * SameSubnetDelay) .

Start with 40 seconds. If you're using the relaxed SameSubnetThreshold and


SameSubnetDelay values recommended previously, don't exceed 80 seconds for

the lease timeout value.


Max failures in a specified period: Set this value to 6.
When using the virtual network name (VNN) and an Azure Load Balancer to
connect to your HADR solution, specify MultiSubnetFailover = true in the
connection string, even if your cluster only spans one subnet.
If the client doesn't support MultiSubnetFailover = True you may need to set
RegisterAllProvidersIP = 0 and HostRecordTTL = 300 to cache client

credentials for shorter durations. However, doing so may cause additional


queries to the DNS server.

To connect to your HADR solution using the distributed network name (DNN),
consider the following:
You must use a client driver that supports MultiSubnetFailover = True , and this
parameter must be in the connection string.
Use a unique DNN port in the connection string when connecting to the DNN
listener for an availability group.
Use a database mirroring connection string for a basic availability group to bypass
the need for a load balancer or DNN.
Validate the sector size of your VHDs before deploying your high availability
solution to avoid having misaligned I/Os. See KB3009974 to learn more.
If the SQL Server database engine, Always On availability group listener, or failover
cluster instance health probe are configured to use a port between 49,152 and
65,536 (the default dynamic port range for TCP/IP), add an exclusion for each port.
Doing so prevents other systems from being dynamically assigned the same port.
The following example creates an exclusion for port 59999:

netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1

store=persistent

Next steps
To learn more, see the other articles in this best practices series:
VM size
Storage
Security
HADR settings
Collect baseline

Consider enabling SQL Assessment for SQL Server on Azure VMs.

Review other SQL Server Virtual Machine articles at SQL Server on Azure Virtual
Machines Overview. If you have questions about SQL Server virtual machines, see the
Frequently Asked Questions.
VM size: Performance best practices for
SQL Server on Azure VMs
Article • 03/29/2023

Applies to:
SQL Server on Azure VM

This article provides VM size guidance a series of best practices and guidelines to
optimize performance for your SQL Server on Azure Virtual Machines (VMs).

There's typically a trade-off between optimizing for costs and optimizing for
performance. This performance best practices series is focused on getting the best
performance for SQL Server on Azure Virtual Machines. If your workload is less
demanding, you might not require every recommended optimization. Consider your
performance needs, costs, and workload patterns as you evaluate these
recommendations.

For comprehensive details, see the other articles in this series: Checklist, Storage,
Security, HADR configuration, Collect baseline.

Checklist
Review the following checklist for a brief overview of the VM size best practices that the
rest of the article covers in greater detail:

The new Ebdsv5-series provides the highest I/O throughput-to-vCore ratio in


Azure along with a memory-to-vCore ratio of 8. This series offers the best price-
performance for SQL Server workloads on Azure VMs. Consider this series first for
most SQL Server workloads.
Use VM sizes with 4 or more vCPUs like the E4ds_v5 or higher.
Use memory optimized virtual machine sizes for the best performance of SQL
Server workloads.
The Edsv5 series, the M-, and the Mv2- series offer the optimal memory-to-vCore
ratio required for OLTP workloads.
The M series VMs offer the highest memory-to-vCore ratio in Azure. Consider
these VMs for mission critical and data warehouse workloads.
Use Azure Marketplace images to deploy your SQL Server Virtual Machines as the
SQL Server settings and storage options are configured for optimal performance.
Collect the target workload's performance characteristics and use them to
determine the appropriate VM size for your business.
Use the Data Migration Assistant and SKU recommendation tools to find the
right VM size for your existing SQL Server workload.
Use Azure Data Studio to migrate to Azure.

To compare the VM size checklist with the others, see the comprehensive Performance
best practices checklist.

Overview
When you're creating a SQL Server on Azure VM, carefully consider the type of workload
necessary. If you're migrating an existing environment, collect a performance baseline to
determine your SQL Server on Azure VM requirements. If this is a new VM, then create
your new SQL Server VM based on your vendor requirements.

If you're creating a new SQL Server VM with a new application built for the cloud, you
can easily size your SQL Server VM as your data and usage requirements evolve.
Start
the development environments with the lower-tier D-Series, B-Series, or Av2-series and
grow your environment over time.

Use the SQL Server VM marketplace images with the storage configuration in the portal.
This makes it easier to properly create the storage pools necessary to get the size, IOPS,
and throughput necessary for your workloads. It is important to choose SQL Server VMs
that support premium storage and premium storage caching. See the storage article to
learn more.

Currently, the Ebdsv5-series provides the highest I/O throughput-to-vCore ratio


available in Azure. If you don't know the I/O requirements for your SQL Server workload,
this series is the one most likely to meet your needs. See the storage article to learn
more.

7 Note

The larger Ebdsv5-series sizes (48 vCPUs and larger) offer support for NVMe
enabled storage access. In order to take advantage of this high I/O performance,
you must deploy your virtual machine using NVMe. NVMe support for SQL Server
marketplace images will be coming soon, but for now you must self-install SQL
Server in order to take advantage of NVMe.

SQL Server data warehouse and mission critical environments will often need to scale
beyond the 8 memory-to-vCore ratio. For medium environments, you may want to
choose a 16 memory-to-vCore ratio, and a 32 memory-to-vCore ratio for larger data
warehouse environments.

SQL Server data warehouse environments often benefit from the parallel processing of
larger machines. For this reason, the M-series and the Mv2-series are good options for
larger data warehouse environments.

Use the vCPU and memory configuration from your source machine as a baseline for
migrating a current on-premises SQL Server database to SQL Server on Azure VMs. If
you have Software Assurance, take advantage of Azure Hybrid Benefit to bring your
licenses to Azure and save on SQL Server licensing costs.

Memory optimized
The memory optimized virtual machine sizes are a primary target for SQL Server VMs
and the recommended choice by Microsoft. The memory optimized virtual machines
offer stronger memory-to-CPU ratios and medium-to-large cache options.

Ebdsv5-series
The Ebdsv5-series is a new memory-optimized series of VMs that offer the highest
remote storage throughput available in Azure. These VMs have a memory-to-vCore
ratio of 8 which, together with the high I/O throughput, makes them ideal for SQL
Server workloads. The Ebdsv5-series VMs offer the best price-performance for SQL
Server workloads running on Azure virtual machines and we strongly recommend them
for most of your production SQL Server workloads.

Edsv5-series
The Edsv5-series is designed for memory-intensive applications and is ideal for SQL
Server workloads that don't require as high I/O throughput as the Ebdsv5 series offers.
These VMs have a large local storage SSD capacity, up to 672 GiB of RAM, and very high
local and remote storage throughput. There's a nearly consistent 8 GiB of memory per
vCore across most of these virtual machines, which is ideal for most SQL Server
workloads.

The largest virtual machine in this group is the Standard_E104ids_v5 that offers 104
vCores and 672 GiBs of memory. This virtual machine is notable because it's isolated
which means it's guaranteed to be the only virtual machine running on the host, and
therefore is isolated from other customer workloads. This has a memory-to-vCore ratio
that is lower than what is recommended for SQL Server, so it should only be used if
isolation is required.

The Edsv5-series virtual machines support premium storage, and premium storage
caching.

ECadsv5-series
The ECadsv5-series virtual machine sizes are memory-optimized Azure confidential
VMs with a temporary disk. Review confidential VMs for information about the security
benefits of Azure confidential VMs.

As the security features of Azure confidential VMs may introduce performance


overheads, test your workload and select a VM size that meets your performance
requirements.

M and Mv2 series


The M-series offers vCore counts and memory for some of the largest SQL Server
workloads.

The Mv2-series has the highest vCore counts and memory and is recommended for
mission critical and data warehouse workloads. Mv2-series instances are memory
optimized VM sizes providing unparalleled computational performance to support large
in-memory databases and workloads with a high memory-to-CPU ratio that is perfect
for relational database servers, large caches, and in-memory analytics.

Some of the features of the M and Mv2-series attractive for SQL Server performance
include premium storage and premium storage caching support, ultra-disk support, and
write acceleration.

General Purpose
The General Purpose virtual machine sizes are designed to provide balanced memory-
to-vCore ratios for smaller entry level workloads such as development and test, web
servers, and smaller database servers.

Because of the smaller memory-to-vCore ratios with the General Purpose virtual
machines, it's important to carefully monitor memory-based performance counters to
ensure SQL Server is able to get the buffer cache memory it needs. See memory
performance baseline for more information.
Since the starting recommendation for production workloads is a memory-to-vCore
ratio of 8, the minimum recommended configuration for a General Purpose VM running
SQL Server is 4 vCPU and 32 GiB of memory.

Ddsv5 series
The Ddsv5-series offers a fair combination of vCPU, memory, and temporary disk but
with smaller memory-to-vCore support.

The Ddsv5 VMs include lower latency and higher-speed local storage.

These machines are ideal for side-by-side SQL and app deployments that require fast
access to temp storage and departmental relational databases. There's a standard
memory-to-vCore ratio of 4 across all of the virtual machines in this series.

For this reason, it's recommended to use the D8ds_v5 as the starter virtual machine in
this series, which has 8 vCores and 32 GiBs of memory. The largest machine is the
D96ds_v5, which has 96 vCores and 256 GiBs of memory.

The Ddsv5-series virtual machines support premium storage and premium storage
caching.

7 Note

The Ddsv5-series does not have the memory-to-vCore ratio of 8 that is


recommended for SQL Server workloads. As such, consider using these virtual
machines for small applications and development workloads only.

DCadsv5-series
The DCadsv5-series virtual machine sizes are general purpose Azure confidential VMs
with temporary disk. Review confidential VMs for information about the security benefits
of Azure confidential VMs.

As the security features of Azure confidential VMs may introduce performance


overheads, test your workload and select a VM size that meets your performance
requirements.

B-series
The burstable B-series virtual machine sizes are ideal for workloads that don't need
consistent performance such as proof of concept and very small application and
development servers.

Most of the burstable B-series virtual machine sizes have a memory-to-vCore ratio of 4.
The largest of these machines is the Standard_B20ms with 20 vCores and 80 GiB of
memory.

This series is unique as the apps have the ability to burst during business hours with
burstable credits varying based on machine size.

When the credits are exhausted, the VM returns to the baseline machine performance.

The benefit of the B-series is the compute savings you could achieve compared to the
other VM sizes in other series especially if you need the processing power sparingly
throughout the day.

This series supports premium storage, but does not support premium storage caching.

7 Note

The burstable B-series does not have the memory-to-vCore ratio of 8 that is
recommended for SQL Server workloads. As such, consider using these virtual
machines for smaller applications, web servers, and development workloads only.

Av2-series
The Av2-series VMs are best suited for entry-level workloads like development and test,
low traffic web servers, small to medium app databases, and proof-of-concepts.

Only the Standard_A2m_v2 (2 vCores and 16GiBs of memory), Standard_A4m_v2 (4


vCores and 32GiBs of memory), and the Standard_A8m_v2 (8 vCores and 64GiBs of
memory) have a good memory-to-vCore ratio of 8 for these top three virtual machines.

These virtual machines are both good options for smaller development and test SQL
Server machines.

The 8 vCore Standard_A8m_v2 may also be a good option for small application and web
servers.

7 Note

The Av2 series does not support premium storage and as such, is not
recommended for production SQL Server workloads even with the virtual machines
that have a memory-to-vCore ratio of 8.
Storage optimized
The storage optimized VM sizes are for specific use cases. These virtual machines are
specifically designed with optimized disk throughput and IO.

Lsv2-series
The Lsv2-series features high throughput, low latency, and local NVMe storage. The
Lsv2-series VMs are optimized to use the local disk on the node attached directly to the
VM rather than using durable data disks.

These virtual machines are strong options for big data, data warehouse, reporting, and
ETL workloads. The high throughput and IOPS of the local NVMe storage is a good use
case for processing files that will be loaded into your database and other scenarios
where the data can be recreated from the source system or other repositories such as
Azure Blob storage or Azure Data Lake. Lsv2-series VMs can also burst their disk
performance for up to 30 minutes at a time.

These virtual machines size from 8 to 80 vCPU with 8 GiB of memory per vCPU and for
every 8 vCPUs there is 1.92 TB of NVMe SSD. This means for the largest VM of this
series, the L80s_v2, there is 80 vCPU and 640 BiB of memory with 10x1.92TB of NVMe
storage. There's a consistent memory-to-vCore ratio of 8 across all of these virtual
machines.

The NVMe storage is ephemeral meaning that data will be lost on these disks if you
deallocate your virtual machine, or if it's moved to a different host for service healing.

The Lsv2 and Ls series support premium storage, but not premium storage caching. The
creation of a local cache to increase IOPs is not supported.

2 Warning

Storing your data files on the ephemeral NVMe storage could result in data loss
when the VM is deallocated.

Constrained vCores
High performing SQL Server workloads often need larger amounts of memory, IOPS,
and throughput without the higher vCore counts.
Most OLTP workloads are application databases driven by large numbers of smaller
transactions. With OLTP workloads, only a small amount of the data is read or modified,
but the volumes of transactions driven by user counts are much higher. It is important
to have the SQL Server memory available to cache plans, store recently accessed data
for performance, and ensure physical reads can be read into memory quickly.

These OLTP environments need higher amounts of memory, fast storage, and the I/O
bandwidth necessary to perform optimally.

In order to maintain this level of performance without the higher SQL Server licensing
costs, Azure offers VM sizes with constrained vCPU counts.

This helps control licensing costs by reducing the available vCores while maintaining the
same memory, storage, and I/O bandwidth of the parent virtual machine.

The vCPU count can be constrained to one-half to one-quarter of the original VM size.
Reducing the vCores available to the virtual machine achieves higher memory-to-vCore
ratios, but the compute cost will remain the same.

These new VM sizes have a suffix that specifies the number of active vCPUs to make
them easier to identify.

For example, the M64-32ms requires licensing only 32 SQL Server vCores with the
memory, I/O, and throughput of the M64ms and the M64-16ms requires licensing only
16 vCores. Though while the M64-16ms has a quarter of the SQL Server licensing cost of
the M64ms, the compute cost of the virtual machines is the same.

7 Note

Medium to large data warehouse workloads may still benefit from


constrained vCore VMs, but data warehouse workloads are commonly
characterized by fewer users and processes addressing larger amounts of data
through query plans that run in parallel.
The compute cost, which includes operating system licensing, will remain the
same as the parent virtual machine.

Next steps
To learn more, see the other articles in this best practices series:

Quick checklist
Storage

Security

HADR settings

Collect baseline

For security best practices, see Security considerations for SQL Server on Azure
Virtual Machines.

Review other SQL Server Virtual Machine articles at SQL Server on Azure Virtual
Machines Overview. If you have questions about SQL Server virtual machines, see
the Frequently Asked Questions.
Storage: Performance best practices for
SQL Server on Azure VMs
Article • 06/22/2023

Applies to:
SQL Server on Azure VM

This article provides storage best practices and guidelines to optimize performance for
your SQL Server on Azure Virtual Machines (VM).

There's typically a trade-off between optimizing for costs and optimizing for
performance. This performance best practices series is focused on getting the best
performance for SQL Server on Azure VMs. If your workload is less demanding, you
might not require every recommended optimization. Consider your performance needs,
costs, and workload patterns as you evaluate these recommendations.

To learn more, see the other articles in this series: Checklist, VM size, Security, HADR
configuration, and Collect baseline.

Checklist
Review the following checklist for a brief overview of the storage best practices that the
rest of the article covers in greater detail:

Monitor the application and determine storage bandwidth and latency


requirements for SQL Server data, log, and tempdb files before choosing the disk
type.
To optimize storage performance, plan for highest uncached IOPS available and
use data caching as a performance feature for data reads while avoiding virtual
machine and disks capping.
Place data, log, and tempdb files on separate drives.
For the data drive, use premium P30 and P40 or smaller disks to ensure the
availability of cache support. When using the Ebdsv5 VM series, use Premium
SSD v2 which provides better price-performance for workloads that require high
IOPS and I/O throughput.
For the log drive plan for capacity and test performance versus cost while
evaluating either Premium SSD v2 or Premium SSD P30 - P80 disks
If submillisecond storage latency is required, use either Premium SSD v2 or
Azure ultra disks for the transaction log.
For M-series virtual machine deployments consider write accelerator over
using Azure ultra disks.
Place tempdb on the local ephemeral SSD (default D:\ ) drive for most SQL
Server workloads that aren't part of Failover Cluster Instance (FCI) after choosing
the optimal VM size.
If the capacity of the local drive isn't enough for tempdb , consider sizing up
the VM. See Data file caching policies for more information.
For FCI place tempdb on the shared storage.
If the FCI workload is heavily dependent on tempdb disk performance, then as
an advanced configuration place tempdb on the local ephemeral SSD (default
D:\ ) drive, which isn't part of FCI storage. This configuration needs custom
monitoring and action to ensure the local ephemeral SSD (default D:\ ) drive
is available all the time as any failures of this drive won't trigger action from
FCI.
Stripe multiple Azure data disks using Storage Spaces to increase I/O bandwidth
up to the target virtual machine's IOPS and throughput limits.
Set host caching to read-only for data file disks.
Set host caching to none for log file disks.
Don't enable read/write caching on disks that contain SQL Server data or log
files.
Always stop the SQL Server service before changing the cache settings of your
disk.
For development and test workloads, and long-term backup archival consider
using standard storage. It isn't recommended to use Standard HDD/SSD for
production workloads.
Credit-based Disk Bursting (P1-P20) should only be considered for smaller dev/test
workloads and departmental systems.
To optimize storage performance, plan for highest uncached IOPS available, and
use data caching as a performance feature for data reads while avoiding virtual
machine and disks capping/throttling.
Format your data disk to use 64-KB allocation unit size for all data files placed on a
drive other than the temporary D:\ drive (which has a default of 4 KB). SQL Server
VMs deployed through Azure Marketplace come with data disks formatted with
allocation unit size and interleave for the storage pool set to 64 KB.
Configure the storage account in the same region as the SQL Server VM.
Disable Azure geo-redundant storage (geo-replication) and use LRS (local
redundant storage) on the storage account.
Enable the SQL Best Practices Assessment to identify possible performance issues
and evaluate that your SQL Server VM is configured to follow best practices.
Review and monitor disk and VM limits using Storage IO utilization metrics.
Exclude SQL Server files from antivirus software scanning. This includes data files,
log files, and backup files.
To compare the storage checklist with the other best practices, see the comprehensive
Performance best practices checklist.

Overview
To find the most effective configuration for SQL Server workloads on an Azure VM, start
by measuring the storage performance of your business application. Once storage
requirements are known, select a virtual machine that supports the necessary IOPS and
throughput with the appropriate memory-to-vCore ratio.

Choose a VM size with enough storage scalability for your workload and a mixture of
disks (usually in a storage pool) that meet the capacity and performance requirements
of your business.

The type of disk depends on both the file type that's hosted on the disk and your peak
performance requirements.

 Tip

Provisioning a SQL Server VM through the Azure portal helps guide you through
the storage configuration process and implements most storage best practices
such as creating separate storage pools for your data and log files, targeting
tempdb to the D:\ drive, and enabling the optimal caching policy. For more

information about provisioning and configuring storage, see SQL VM storage


configuration.

VM disk types
You have a choice in the performance level for your disks. The types of managed disks
available as underlying storage (listed by increasing performance capabilities) are
Standard hard disk drives (HDD), Standard solid-state drives (SSD), Premium SSDs,
Premium SSD v2, and Ultra Disks.

For Standard HDDs, Standard SSDs, and Premium SSDs, the performance of the disk
increases with the size of the disk, grouped by premium disk labels such as the P1 with 4
GiB of space and 120 IOPS to the P80 with 32 TiB of storage and 20,000 IOPS. Premium
storage supports a storage cache that helps improve read and write performance for
some workloads. For more information, see Managed disks overview.

The performance of Premium SSD v2 and Ultra Disks can be changed independently of
the size of the disk, for details see Ultra disk performance and Premium SSD v2
performance.

There are also three main disk roles to consider for your SQL Server on Azure VM - an
OS disk, a temporary disk, and your data disks. Carefully choose what is stored on the
operating system drive (C:\) and the ephemeral temporary drive (D:\) .

Operating system disk


An operating system disk is a VHD that can be booted and mounted as a running
version of an operating system and is labeled as the C:\ drive. When you create an
Azure VM, the platform attaches at least one disk to the VM for the operating system
disk. The C:\ drive is the default location for application installs and file configuration.

For production SQL Server environments, don't use the operating system disk for data
files, log files, error logs.

Temporary disk
Many Azure VMs contain another disk type called the temporary disk (labeled as the
D:\ drive). Depending on the VM series and size the capacity of this disk will vary. The

temporary disk is ephemeral, which means the disk storage is recreated (as in, it's
deallocated and allocated again), when the VM is restarted, or moved to a different host
(for service healing, for example).

The temporary storage drive isn't persisted to remote storage and therefore shouldn't
store user database files, transaction log files, or anything that must be preserved.

Place tempdb on the local temporary SSD D:\ drive for SQL Server workloads unless
consumption of local cache is a concern. If you're using a VM that doesn't have a
temporary disk then it's recommended to place tempdb on its own isolated disk or
storage pool with caching set to read-only. To learn more, see tempdb data caching
policies.

Data disks
Data disks are remote storage disks that are often created in storage pools in order to
exceed the capacity and performance that any single disk could offer to the VM.

Attach the minimum number of disks that satisfies the IOPS, throughput, and capacity
requirements of your workload. Don't exceed the maximum number of data disks of the
smallest VM you plan to resize to.
Place data and log files on data disks provisioned to best suit performance
requirements.

Format your data disk to use 64-KB allocation unit size for all data files placed on a drive
other than the temporary D:\ drive (which has a default of 4 KB). SQL Server VMs
deployed through Azure Marketplace come with data disks formatted with allocation
unit size and interleave for the storage pool set to 64 KB.

7 Note

It's also possible to host your SQL Server database files directly on Azure Blob
storage or on SMB storage such as Azure premium file share, but we recommend
using Azure managed disks for the best performance, reliability, and feature
availability.

Premium SSD v2
You should use Premium SSD v2 disks when running SQL Server workloads in supported
regions, if the current limitations are suitable for your environment. Depending on your
configuration, Premium SSD v2 can be cheaper than Premium SSDs, while also providing
performance improvements. With Premium SSD v2, you can individually adjust your
throughput or IOPS independently from the size of your disk. Being able to individually
adjust performance options allows for this larger cost savings and allows you to script
changes to meet performance requirements during anticipated or known periods of
need. We recommend using Premium SSD v2 when using the Ebdsv5 VM series as it is a
more cost-effective solution for these high I/O throughput machines. Premium SSD v2
doesn't currently support host caching, so choosing a VM size with high uncached
throughput such as the Ebdsv5 series VMs is recommended.

Premium SSD v2 disks aren't currently supported by SQL Server gallery images, but they
can be used with SQL Server on Azure VMs when configured manually.

Premium SSD
Use Premium SSDs for data and log files for production SQL Server workloads. Premium
SSD IOPS and bandwidth vary based on the disk size and type.

For production workloads, use the P30 and/or P40 disks for SQL Server data files to
ensure caching support and use the P30 up to P80 for SQL Server transaction log files.
For the best total cost of ownership, start with P30s (5000 IOPS/200 MBPS) for data and
log files and only choose higher capacities when you need to control the VM disk count.
For dev/test or small systems you can choose to use sizes smaller than P30 as these do
support caching, but they don't offer reserved pricing.

For OLTP workloads, match the target IOPS per disk (or storage pool) with your
performance requirements using workloads at peak times and the Disk Reads/sec +
Disk Writes/sec performance counters. For data warehouse and reporting workloads,

match the target throughput using workloads at peak times and the Disk Read
Bytes/sec + Disk Write Bytes/sec .

Use Storage Spaces to achieve optimal performance, configure two pools, one for the
log file(s) and the other for the data files. If you aren't using disk striping, use two
premium SSD disks mapped to separate drives, where one drive contains the log file and
the other contains the data.

The provisioned IOPS and throughput per disk that is used as part of your storage pool.
The combined IOPS and throughput capabilities of the disks is the maximum capability
up to the throughput limits of the VM.

The best practice is to use the least number of disks possible while meeting the minimal
requirements for IOPS (and throughput) and capacity. However, the balance of price and
performance tends to be better with a large number of small disks rather than a small
number of large disks.

Scale premium disks


The size of your Premium SSD determines the initial performance tier of your disk.
Designate the performance tier at deployment or change it afterwards, without
changing the size of the disk. If demand increases, you can increase the performance
level to meet your business needs.

Changing the performance tier allows administrators to prepare for and meet higher
demand without relying on disk bursting.

Use the higher performance for as long as needed where billing is designed to meet the
storage performance tier. Upgrade the tier to match the performance requirements
without increasing the capacity. Return to the original tier when the extra performance is
no longer required.

This cost-effective and temporary expansion of performance is a strong use case for
targeted events such as shopping, performance testing, training events and other brief
windows where greater performance is needed only for a short term.
For more information, see Performance tiers for managed disks.

Azure ultra disk


If there's a need for submillisecond response times with reduced latency consider using
Azure ultra disk for the SQL Server log drive, or even the data drive for applications that
are extremely sensitive to I/O latency.

Ultra disk can be configured where capacity and IOPS can scale independently. With
ultra disk administrators can provision a disk with the capacity, IOPS, and throughput
requirements based on application needs.

Ultra disk isn't supported on all VM series and has other limitations such as region
availability, redundancy, and support for Azure Backup. To learn more, see Using Azure
ultra disks for a full list of limitations.

Standard HDDs and SSDs


Standard HDDs and SSDs have varying latencies and bandwidth and are only
recommended for dev/test workloads. Production workloads should use Premium SSD
v2 or Premium SSDs. If you're using Standard SSD (dev/test scenarios), the
recommendation is to add the maximum number of data disks supported by your VM
size and use disk striping with Storage Spaces for the best performance.

Caching
VMs that support premium storage caching can take advantage of an additional feature
called the Azure BlobCache or host caching to extend the IOPS and throughput
capabilities of a VM. VMs enabled for both premium storage and premium storage
caching have these two different storage bandwidth limits that can be used together to
improve storage performance.

The IOPS and MBps throughput without caching counts against a VM's uncached disk
throughput limits. The maximum cached limits provide another buffer for reads that
helps address growth and unexpected peaks.

Enable premium caching whenever the option is supported to significantly improve


performance for reads against the data drive without extra cost.

Reads and writes to the Azure BlobCache (cached IOPS and throughput) don't count
against the uncached IOPS and throughput limits of the VM.
7 Note

Disk Caching is not supported for disks 4 TiB and larger (P50 and larger). If multiple
disks are attached to your VM, each disk that is smaller than 4 TiB will support
caching. For more information, see Disk caching.

Uncached throughput
The max uncached disk IOPS and throughput is the maximum remote storage limit that
the VM can handle. This limit is defined at the VM and isn't a limit of the underlying disk
storage. This limit applies only to I/O against data drives remotely attached to the VM,
not the local I/O against the temp drive ( D:\ drive) or the OS drive.

The amount of uncached IOPS and throughput that is available for a VM can be verified
in the documentation for your VM.

For example, the M-series documentation shows that the max uncached throughput for
the Standard_M8ms VM is 5000 IOPS and 125 MBps of uncached disk throughput.

Likewise, you can see that the Standard_M32ts supports 20,000 uncached disk IOPS and
500-MBps uncached disk throughput. This limit is governed at the VM level regardless
of the underlying premium disk storage.

For more information, see uncached and cached limits.

Cached and temp storage throughput


The max cached and temp storage throughput limit is a separate limit from the
uncached throughput limit on the VM. The Azure BlobCache consists of a combination
of the VM host's random-access memory and locally attached SSD. The temp drive ( D:\
drive) within the VM is also hosted on this local SSD.

The max cached and temp storage throughput limit governs the I/O against the local
temp drive ( D:\ drive) and the Azure BlobCache only if host caching is enabled.
When caching is enabled on premium storage, VMs can scale beyond the limitations of
the remote storage uncached VM IOPS and throughput limits.

Only certain VMs support both premium storage and premium storage caching (which
needs to be verified in the virtual machine documentation). For example, the M-series
documentation indicates that both premium storage, and premium storage caching is
supported:

The limits of the cache vary based on the VM size. For example, the Standard_M8ms VM
supports 10000 cached disk IOPS and 1000-MBps cached disk throughput with a total
cache size of 793 GiB. Similarly, the Standard_M32ts VM supports 40000 cached disk
IOPS and 400-MBps cached disk throughput with a total cache size of 3174 GiB.

You can manually enable host caching on an existing VM. Stop all application workloads
and the SQL Server services before any changes are made to your VM's caching policy.
Changing any of the VM cache settings results in the target disk being detached and
reattached after the settings are applied.

Data file caching policies


Your storage caching policy varies depending on the type of SQL Server data files that
are hosted on the drive.
The following table provides a summary of the recommended caching policies based on
the type of SQL Server data:

SQL Server Recommendation


disk

Data disk Enable Read-only caching for the disks hosting SQL Server data files.

Reads from cache will be faster than the uncached reads from the data disk.

Uncached IOPS and throughput plus Cached IOPS and throughput yield the total
possible performance available from the VM within the VMs limits, but actual
performance varies based on the workload's ability to use the cache (cache hit
ratio).

Transaction Set the caching policy to None for disks hosting the transaction log. There's no
log disk performance benefit to enabling caching for the Transaction log disk, and in fact
having either Read-only or Read/Write caching enabled on the log drive can
degrade performance of the writes against the drive and decrease the amount of
cache available for reads on the data drive.

Operating The default caching policy is Read/write for the OS drive.

OS disk It isn't recommended to change the caching level of the OS drive.

tempdb If tempdb can't be placed on the ephemeral drive D:\ due to capacity reasons,
either resize the VM to get a larger ephemeral drive or place tempdb on a separate
data drive with Read-only caching configured.

The VM cache and ephemeral drive both use the local SSD, so keep this in mind
when sizing as tempdb I/O will count against the cached IOPS and throughput VM
limits when hosted on the ephemeral drive.

) Important

Changing the cache setting of an Azure disk detaches and reattaches the target
disk. When changing the cache setting for a disk that hosts SQL Server data, log, or
application files, be sure to stop the SQL Server service along with any other related
services to avoid data corruption.

To learn more, see Disk caching.

Disk striping
Analyze the throughput and bandwidth required for your SQL data files to determine
the number of data disks, including the log file and tempdb . Throughput and bandwidth
limits vary by VM size. To learn more, see VM Size
Add more data disks and use disk striping for more throughput. For example, an
application that needs 12,000 IOPS and 180-MB/s throughput can use three striped P30
disks to deliver 15,000 IOPS and 600-MB/s throughput.

To configure disk striping, see disk striping.

Disk capping
There are throughput limits at both the disk and VM level. The maximum IOPS limits per
VM and per disk differ and are independent of each other.

Applications that consume resources beyond these limits will be throttled (also known
as capped). Select a VM and disk size in a disk stripe that meets application
requirements and won't face capping limitations. To address capping, use caching, or
tune the application so that less throughput is required.

For example, an application that needs 12,000 IOPS and 180 MB/s can:

Use the Standard_M32ms, which has a maximum uncached disk throughput of


20,000 IOPS and 500 MBps.
Stripe three P30 disks to deliver 15,000 IOPS and 600-MB/s throughput.
Use a Standard_M16ms VM and use host caching to utilize local cache over
consuming throughput.

VMs configured to scale up during times of high utilization should provision storage
with enough IOPS and throughput to support the maximum VM size while keeping the
overall number of disks less than or equal to the maximum number supported by the
smallest VM SKU targeted to be used.

For more information on disk capping limitations and using caching to avoid capping,
see Disk IO capping.

7 Note

Some disk capping may still result in satisfactory performance to users; tune and
maintain workloads rather than resize to a larger VM to balance managing cost and
performance for the business.

Write Acceleration
Write Acceleration is a disk feature that is only available for the M-Series VMs. The
purpose of Write Acceleration is to improve the I/O latency of writes against Azure
Premium Storage when you need single digit I/O latency due to high volume mission
critical OLTP workloads or data warehouse environments.

Use Write Acceleration to improve write latency to the drive hosting the log files. Don't
use Write Acceleration for SQL Server data files.

Write Accelerator disks share the same IOPS limit as the VM. Attached disks can't exceed
the Write Accelerator IOPS limit for a VM.

The following table outlines the number of data disks and IOPS supported per VM:

VM SKU # Write Accelerator disks Write Accelerator disk IOPS per VM

M416ms_v2, M416s_v2 16 20000

M128ms, M128s 16 20000

M208ms_v2, M208s_v2 8 10000

M64ms, M64ls, M64s 8 10000

M32ms, M32ls, M32ts, 4 5000


M32s

M16ms, M16s 2 2500

M8ms, M8s 1 1250

There are several restrictions to using Write Acceleration. To learn more, see Restrictions
when using Write Accelerator.

Compare to Azure ultra disk


The biggest difference between Write Acceleration and Azure ultra disks is that Write
Acceleration is a VM feature only available for the M-Series and Azure ultra disks is a
storage option. Write Acceleration is a write-optimized cache with its own limitations
based on the VM size. Azure ultra disks are a low latency disk storage option for Azure
VMs.

If possible, use Write Acceleration over ultra disks for the transaction log disk. For VMs
that don't support Write Acceleration but require low latency to the transaction log, use
Azure ultra disks.
Monitor storage performance
To assess storage needs, and determine how well storage is performing, you need to
understand what to measure, and what those indicators mean.

IOPS (Input/Output per second) is the number of requests the application is making to
storage per second. Measure IOPS using Performance Monitor counters Disk Reads/sec
and Disk Writes/sec . OLTP (Online transaction processing) applications need to drive
higher IOPS in order to achieve optimal performance. Applications such as payment
processing systems, online shopping, and retail point-of-sale systems are all examples
of OLTP applications.

Throughput is the volume of data that is being sent to the underlying storage, often
measured by megabytes per second. Measure throughput with the Performance
Monitor counters Disk Read Bytes/sec and Disk Write Bytes/sec . Data warehousing is
optimized around maximizing throughput over IOPS. Applications such as data stores
for analysis, reporting, ETL workstreams, and other business intelligence targets are all
examples of data warehousing applications.

I/O unit sizes influence IOPS and throughput capabilities as smaller I/O sizes yield higher
IOPS and larger I/O sizes yield higher throughput. SQL Server chooses the optimal I/O
size automatically. For more information about, see Optimize IOPS, throughput, and
latency for your applications.

There are specific Azure Monitor metrics that are invaluable for discovering capping at
the VM and disk level as well as the consumption and the health of the AzureBlob cache.
To identify key counters to add to your monitoring solution and Azure portal dashboard,
see Storage utilization metrics.

7 Note

Azure Monitor doesn't currently offer disk-level metrics for the ephemeral temp
drive (D:\) . VM Cached IOPS Consumed Percentage and VM Cached Bandwidth
Consumed Percentage will reflect IOPS and throughput from both the ephemeral
temp drive (D:\) and host caching together.

Next steps
To learn more, see the other articles in this best practices series:

Quick checklist
VM size

Security

HADR settings

Collect baseline

For security best practices, see Security considerations for SQL Server on Azure
Virtual Machines.

For detailed testing of SQL Server performance on Azure VMs with TPC-E and
TPC_C benchmarks, refer to the blog Optimize OLTP performance .

Review other SQL Server Virtual Machine articles at SQL Server on Azure Virtual
Machines Overview. If you have questions about SQL Server virtual machines, see
the Frequently Asked Questions.
Security considerations for SQL Server
on Azure Virtual Machines
Article • 03/29/2023

Applies to:
SQL Server on Azure VM

This article includes overall security guidelines that help establish secure access to SQL
Server instances in an Azure virtual machine (VM).

Azure complies with several industry regulations and standards that can enable you to
build a compliant solution with SQL Server running in a virtual machine. For information
about regulatory compliance with Azure, see Azure Trust Center .

First review the security best practices for SQL Server and Azure VMs and then review
this article for the best practices that apply to SQL Server on Azure VMs specifically.

To learn more about SQL Server VM best practices, see the other articles in this series:
Checklist, VM size, HADR configuration, and Collect baseline.

Checklist
Review the following checklist in this section for a brief overview of the security best
practices that the rest of the article covers in greater detail.

SQL Server features and capabilities provide a method of security at the data level and is
how you achieve defense-in-depth at the infrastructure level for cloud-based and
hybrid solutions. In addition, with Azure security measures, it is possible to encrypt your
sensitive data, protect virtual machines from viruses and malware, secure network traffic,
identify and detect threats, meet compliance requirements, and provides a single
method for administration and reporting for any security need in the hybrid cloud.

Use Microsoft Defender for Cloud to evaluate and take action to improve the
security posture of your data environment. Capabilities such as Azure Advanced
Threat Protection (ATP) can be leveraged across your hybrid workloads to improve
security evaluation and give the ability to react to risks. Registering your SQL
Server VM with the SQL IaaS Agent extension surfaces Microsoft Defender for
Cloud assessments within the SQL virtual machine resource of the Azure portal.
Use Microsoft Defender for SQL to discover and mitigate potential database
vulnerabilities, as well as detect anomalous activities that could indicate a threat to
your SQL Server instance and database layer.
Vulnerability Assessment is a part of Microsoft Defender for SQL that can discover
and help remediate potential risks to your SQL Server environment. It provides
visibility into your security state, and includes actionable steps to resolve security
issues.
Use Azure confidential VMs to reinforce protection of your data in-use, and data-
at-rest against host operator access. Azure confidential VMs allow you to
confidently store your sensitive data in the cloud and meet strict compliance
requirements.
If you're on SQL Server 2022, consider using Azure Active Directory authentication
to connect to your instance of SQL Server.
Azure Advisor analyzes your resource configuration and usage telemetry and then
recommends solutions that can help you improve the cost effectiveness,
performance, high availability, and security of your Azure resources. Leverage
Azure Advisor at the virtual machine, resource group, or subscription level to help
identify and apply best practices to optimize your Azure deployments.
Use Azure Disk Encryption when your compliance and security needs require you
to encrypt the data end-to-end using your encryption keys, including encryption of
the ephemeral (locally attached temporary) disk.
Managed Disks are encrypted at rest by default using Azure Storage Service
Encryption, where the encryption keys are Microsoft-managed keys stored in
Azure.
For a comparison of the managed disk encryption options review the managed
disk encryption comparison chart
Management ports should be closed on your virtual machines - Open remote
management ports expose your VM to a high level of risk from internet-based
attacks. These attacks attempt to brute force credentials to gain admin access to
the machine.
Turn on Just-in-time (JIT) access for Azure virtual machines
Use Azure Bastion over Remote Desktop Protocol (RDP).
Lock down ports and only allow the necessary application traffic using Azure
Firewall which is a managed Firewall as a Service (FaaS) that grants/ denies server
access based on the originating IP address.
Use Network Security Groups (NSGs) to filter network traffic to, and from, Azure
resources on Azure Virtual Networks
Leverage Application Security Groups to group servers together with similar port
filtering requirements, with similar functions, such as web servers and database
servers.
For web and application servers leverage Azure Distributed Denial of Service
(DDoS) protection. DDoS attacks are designed to overwhelm and exhaust network
resources, making apps slow or unresponsive. It is common for DDos attacks to
target user interfaces. Azure DDoS protection sanitizes unwanted network traffic,
before it impacts service availability
Use VM extensions to help address anti-malware, desired state, threat detection,
prevention, and remediation to address threats at the operating system, machine,
and network levels:
Guest Configuration extension performs audit and configuration operations
inside virtual machines.
Network Watcher Agent virtual machine extension for Windows and Linux
monitors network performance, diagnostic, and analytics service that allows
monitoring of Azure networks.
Microsoft Antimalware Extension for Windows to help identify and remove
viruses, spyware, and other malicious software, with configurable alerts.
Evaluate 3rd party extensions such as Symantec Endpoint Protection for
Windows VM (/azure/virtual-machines/extensions/symantec)
Use Azure Policy to create business rules that can be applied to your environment.
Azure Policies evaluate Azure resources by comparing the properties of those
resources against rules defined in JSON format.
Azure Blueprints enables cloud architects and central information technology
groups to define a repeatable set of Azure resources that implements and adheres
to an organization's standards, patterns, and requirements. Azure Blueprints are
different than Azure Policies.

For more information about security best practices, see SQL Server security best
practices and Securing SQL Server.

Microsoft Defender for SQL on machines


Microsoft Defender for Cloud is a unified security management system that is designed
to evaluate and provide opportunities to improve the security posture of your data
environment. Microsoft Defender offers Microsoft Defender for SQL on machines
protection for SQL Server on Azure VMs. Use Microsoft Defender for SQL to discover
and mitigate potential database vulnerabilities, and detect anomalous activities that may
indicate a threat to your SQL Server instance and database layer.

Microsoft Defender for SQL offers the following benefits:

Vulnerability Assessments can discover and help remediate potential risks to your
SQL Server environment. It provides visibility into your security state, and it
includes actionable steps to resolve security issues.
Use security score in Microsoft Defender for Cloud.
Review the list of the compute and data recommendations currently available, for
further details.
Registering your SQL Server VM with the SQL Server IaaS Agent Extension surfaces
Microsoft Defender for SQL recommendations to the SQL virtual machines
resource in the Azure portal.

Portal management
After you've registered your SQL Server VM with the SQL IaaS Agent extension, you can
configure a number of security settings using the SQL virtual machines resource in the
Azure portal, such as enabling Azure Key Vault integration, or SQL authentication.

Additionally, after you've enabled Microsoft Defender for SQL on machines you can view
Defender for Cloud features directly within the SQL virtual machines resource in the
Azure portal, such as vulnerability assessments and security alerts.

See manage SQL Server VM in the portal to learn more.

Confidential VMs
Azure confidential VMs provide a strong, hardware-enforced boundary that hardens the
protection of the guest OS against host operator access. Choosing a confidential VM
size for your SQL Server on Azure VM provides an extra layer of protection, enabling you
to confidently store your sensitive data in the cloud and meet strict compliance
requirements.

Azure confidential VMs leverage AMD processors with SEV-SNP technology that encrypt
the memory of the VM using keys generated by the processor. This helps protect data
while it's in use (the data that is processed inside the memory of the SQL Server process)
from unauthorized access from the host OS. The OS disk of a confidential VM can also
be encrypted with keys bound to the Trusted Platform Module (TPM) chip of the virtual
machine, reinforcing protection for data-at-rest.

For detailed deployment steps, see the Quickstart: Deploy SQL Server to a confidential
VM.

Recommendations for disk encryption are different for confidential VMs than for the
other VM sizes. See disk encryption to learn more.

Azure AD authentication
Starting with SQL Server 2022, you can connect to SQL Server using one of the following
Azure Active Directory (Azure AD) identity authentication methods:

Azure AD Password
Azure AD Integrated
Azure AD Universal with Multi-Factor Authentication
Azure Active Directory access token

To get started, review Configure Azure AD authentication for your SQL Server VM.

Azure Advisor
Azure Advisor is a personalized cloud consultant that helps you follow best practices to
optimize your Azure deployments. Azure Advisor analyzes your resource configuration
and usage telemetry and then recommends solutions that can help you improve the
cost effectiveness, performance, high availability, and security of your Azure resources.
Azure Advisor can evaluate at the virtual machine, resource group, or subscription level.

Azure Key Vault integration


There are multiple SQL Server encryption features, such as transparent data encryption
(TDE), column level encryption (CLE), and backup encryption. These forms of encryption
require you to manage and store the cryptographic keys you use for encryption. The
Azure Key Vault service is designed to improve the security and management of these
keys in a secure and highly available location. The SQL Server Connector allows SQL
Server to use these keys from Azure Key Vault.

Consider the following:

Azure Key Vault stores application secrets in a centralized cloud location to


securely control access permissions, and separate access logging.
When bringing your own keys to Azure it is recommended to store secrets and
certificates in the Azure Key Vault.
Azure Disk Encryption uses Azure Key Vault to control and manage disk encryption
keys and secrets.

Access control
When you create a SQL Server virtual machine with an Azure gallery image, the SQL
Server Connectivity option gives you the choice of Local (inside VM), Private (within
Virtual Network), or Public (Internet).
For the best security, choose the most restrictive option for your scenario. For example,
if you are running an application that accesses SQL Server on the same VM, then Local is
the most secure choice. If you are running an Azure application that requires access to
the SQL Server, then Private secures communication to SQL Server only within the
specified Azure virtual network. If you require Public (internet) access to the SQL Server
VM, then make sure to follow other best practices in this topic to reduce your attack
surface area.

The selected options in the portal use inbound security rules on the VM's network
security group (NSG) to allow or deny network traffic to your virtual machine. You can
modify or create new inbound NSG rules to allow traffic to the SQL Server port (default
1433). You can also specify specific IP addresses that are allowed to communicate over
this port.

In addition to NSG rules to restrict network traffic, you can also use the Windows
Firewall on the virtual machine.

If you are using endpoints with the classic deployment model, remove any endpoints on
the virtual machine if you do not use them. For instructions on using ACLs with
endpoints, see Manage the ACL on an endpoint. This is not necessary for VMs that use
the Azure Resource Manager.
Consider enabling encrypted connections for the instance of the SQL Server Database
Engine in your Azure virtual machine. Configure SQL server instance with a signed
certificate. For more information, see Enable Encrypted Connections to the Database
Engine and Connection String Syntax.

Consider the following when securing the network connectivity or perimeter:

Azure Firewall - A stateful, managed, Firewall as a Service (FaaS) that grants/ denies
server access based on originating IP address, to protect network resources.
Azure Distributed Denial of Service (DDoS) protection - DDoS attacks overwhelm
and exhaust network resources, making apps slow or unresponsive. Azure DDoS
protection sanitizes unwanted network traffic before it impacts service availability.
Network Security Groups (NSGs) - Filters network traffic to, and from, Azure
resources on Azure Virtual Networks
Application Security Groups - Provides for the grouping of servers with similar port
filtering requirements, and group together servers with similar functions, such as
web servers.

Disk encryption
This section provides guidance for disk encryption, but the recommendations vary
depending on if you're deploying a conventional SQL Server on Azure VM, or SQL
Server to an Azure confidential VM.

Conventional VMs
Managed disks deployed to VMs that are not Azure confidential VMs use server-side
encryption, and Azure Disk Encryption. Server-side encryption provides encryption-at-
rest and safeguards your data to meet your organizational security and compliance
commitments. Azure Disk Encryption uses either BitLocker or DM-Crypt technology, and
integrates with Azure Key Vault to encrypt both the OS and data disks.

Consider the following:

Azure Disk Encryption - Encrypts virtual machine disks using Azure Disk Encryption
both for Windows and Linux virtual machines.
When your compliance and security requirements require you to encrypt the
data end-to-end using your encryption keys, including encryption of the
ephemeral (locally attached temporary) disk, use
Azure disk encryption.
Azure Disk Encryption (ADE) leverages the industry-standard BitLocker feature
of Windows and the DM-Crypt feature of Linux to
provide OS and data disk
encryption.
Managed Disk Encryption
Managed Disks are encrypted at rest by default using Azure Storage Service
Encryption where the encryption keys are Microsoft managed keys stored in
Azure.
Data in Azure managed disks is encrypted transparently using 256-bit AES
encryption, one of the strongest block ciphers available, and is FIPS 140-2
compliant.
For a comparison of the managed disk encryption options review the managed
disk encryption comparison chart.

Azure confidential VMs


If you are using an Azure confidential VM, consider the following recommendations to
maximize security benefits:

Configure confidential OS disk encryption, which binds the OS disk encryption keys
to the Trusted Platform Module (TPM) chip of the virtual machine, and makes the
protected disk content accessible only to the VM.
Encrypt your data disks (any disks containing database files, log files, or backup
files) with BitLocker, and enable automatic unlocking - review manage-bde
autounlock or EnableBitLockerAutoUnlock for more information. Automatic
unlocking ensures the encryption keys are stored on the OS disk. In conjunction
with confidential OS disk encryption, this protects the data-at-rest stored to the
VM disks from unauthorized host access.

Trusted Launch
When you deploy a generation 2 virtual machine, you have the option to enable trusted
launch, which protects against advanced and persistent attack techniques.

With trusted launch, you can:

Securely deploy virtual machines with verified boot loaders, OS kernels, and
drivers.
Securely protect keys, certificates, and secrets in the virtual machines.
Gain insights and confidence of the entire boot chain's integrity.
Ensure workloads are trusted and verifiable.

The following features are currently unsupported when you enable trusted launch for
your SQL Server on Azure VMs:

Azure Site Recovery


Ultra disks
Managed images
Nested virtualization

Manage accounts
You don't want attackers to easily guess account names or passwords. Use the following
tips to help:

Create a unique local administrator account that is not named Administrator.

Use complex strong passwords for all your accounts. For more information about
how to create a strong password, see Create a strong password article.

By default, Azure selects Windows Authentication during SQL Server virtual


machine setup. Therefore, the SA login is disabled and a password is assigned by
setup. We recommend that the SA login should not be used or enabled. If you
must have a SQL login, use one of the following strategies:

Create a SQL account with a unique name that has sysadmin membership. You
can do this from the portal by enabling SQL Authentication during
provisioning.

 Tip

If you do not enable SQL Authentication during provisioning, you must


manually change the authentication mode to SQL Server and Windows
Authentication Mode. For more information, see Change Server
Authentication Mode.

If you must use the SA login, enable the login after provisioning and assign a
new strong password.

7 Note

Connecting to a SQL Server instance that's running on an Azure virtual machine


(VM) is not supported using Azure Active Directory or Azure Active Directory
Domain Services. Use an Active Directory domain account instead.

Auditing and reporting


Auditing with Log Analytics documents events and writes to an audit log in a secure
Azure Blob Storage account. Log Analytics can be used to decipher the details of the
audit logs. Auditing gives you the ability to save data to a separate storage account and
create an audit trail of all events you select. You can also leverage Power BI against the
audit log for quick analytics of and insights about your data, as well as to provide a view
for regulatory compliance. To learn more about auditing at the VM and Azure levels, see
Azure security logging and auditing.

Virtual Machine level access


Close management ports on your machine - Open remote management ports are
exposing your VM to a high level of risk from internet-based attacks. These attacks
attempt to brute force credentials to gain admin access to the machine.

Turn on Just-in-time (JIT) access for Azure virtual machines.


Leverage Azure Bastion over Remote Desktop Protocol (RDP).

Virtual Machine extensions


Azure Virtual Machine extensions are trusted Microsoft or 3rd party extensions that can
help address specific needs and risks such as antivirus, malware, threat protection, and
more.

Guest Configuration extension


To ensure secure configurations of in-guest settings of your machine, install the
Guest Configuration extension.
In-guest settings include the configuration of the operating system, application
configuration or presence, and environment settings.
Once installed, in-guest policies will be available such as 'Windows Exploit
guard should be enabled'.
Network traffic data collection agent
Microsoft Defender for Cloud uses the Microsoft Dependency agent to collect
network traffic data from your Azure virtual machines.
This agent enables advanced network protection features such as traffic
visualization on the network map, network hardening recommendations, and
specific network threats.
Evaluate extensions from Microsoft and 3rd parties to address anti-malware,
desired state, threat detection, prevention, and remediation to address threats at
the operating system, machine, and network levels.
Next steps
Review the security best practices for SQL Server and Azure VMs and then review this
article for the best practices that apply to SQL Server on Azure VMs specifically.

For other topics related to running SQL Server in Azure VMs, see SQL Server on Azure
Virtual Machines overview. If you have questions about SQL Server virtual machines, see
the Frequently Asked Questions.

To learn more, see the other articles in this best practices series:

Quick checklist
VM size
Storage
HADR settings
Collect baseline
HADR configuration best practices (SQL
Server on Azure VMs)
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

A Windows Server Failover Cluster is used for high availability and disaster recovery
(HADR) with SQL Server on Azure Virtual Machines (VMs).

This article provides cluster configuration best practices for both failover cluster
instances (FCIs) and availability groups when you use them with SQL Server on Azure
VMs.

To learn more, see the other articles in this series: Checklist, VM size, Storage, Security,
HADR configuration, Collect baseline.

Checklist
Review the following checklist for a brief overview of the HADR best practices that the
rest of the article covers in greater detail.

High availability and disaster recovery (HADR) features, such as the Always On
availability group and the failover cluster instance rely on underlying Windows Server
Failover Cluster technology. Review the best practices for modifying your HADR settings
to better support the cloud environment.

For your Windows cluster, consider these best practices:

Deploy your SQL Server VMs to multiple subnets whenever possible to avoid the
dependency on an Azure Load Balancer or a distributed network name (DNN) to
route traffic to your HADR solution.
Change the cluster to less aggressive parameters to avoid unexpected outages
from transient network failures or Azure platform maintenance. To learn more, see
heartbeat and threshold settings. For Windows Server 2012 and later, use the
following recommended values:
SameSubnetDelay: 1 second
SameSubnetThreshold: 40 heartbeats
CrossSubnetDelay: 1 second
CrossSubnetThreshold: 40 heartbeats
Place your VMs in an availability set or different availability zones. To learn more,
see VM availability settings.
Use a single NIC per cluster node.
Configure cluster quorum voting to use 3 or more odd number of votes. Don't
assign votes to DR regions.
Carefully monitor resource limits to avoid unexpected restarts or failovers due to
resource constraints.
Ensure your OS, drivers, and SQL Server are at the latest builds.
Optimize performance for SQL Server on Azure VMs. Review the other sections
in this article to learn more.
Reduce or spread out workload to avoid resource limits.
Move to a VM or disk that his higher limits to avoid constraints.

For your SQL Server availability group or failover cluster instance, consider these best
practices:

If you're experiencing frequent unexpected failures, follow the performance best


practices outlined in the rest of this article.
If optimizing SQL Server VM performance doesn't resolve your unexpected
failovers, consider relaxing the monitoring for the availability group or failover
cluster instance. However, doing so may not address the underlying source of the
issue and could mask symptoms by reducing the likelihood of failure. You may still
need to investigate and address the underlying root cause. For Windows Server
2012 or higher, use the following recommended values:
Lease timeout: Use this equation to calculate the maximum lease time-out
value:

Lease timeout < (2 * SameSubnetThreshold * SameSubnetDelay) .

Start with 40 seconds. If you're using the relaxed SameSubnetThreshold and


SameSubnetDelay values recommended previously, don't exceed 80 seconds for

the lease timeout value.


Max failures in a specified period: Set this value to 6.
When using the virtual network name (VNN) and an Azure Load Balancer to
connect to your HADR solution, specify MultiSubnetFailover = true in the
connection string, even if your cluster only spans one subnet.
If the client doesn't support MultiSubnetFailover = True you may need to set
RegisterAllProvidersIP = 0 and HostRecordTTL = 300 to cache client

credentials for shorter durations. However, doing so may cause additional


queries to the DNS server.

To connect to your HADR solution using the distributed network name (DNN),
consider the following:
You must use a client driver that supports MultiSubnetFailover = True , and this
parameter must be in the connection string.
Use a unique DNN port in the connection string when connecting to the DNN
listener for an availability group.
Use a database mirroring connection string for a basic availability group to bypass
the need for a load balancer or DNN.
Validate the sector size of your VHDs before deploying your high availability
solution to avoid having misaligned I/Os. See KB3009974 to learn more.
If the SQL Server database engine, Always On availability group listener, or failover
cluster instance health probe are configured to use a port between 49,152 and
65,536 (the default dynamic port range for TCP/IP), add an exclusion for each port.
Doing so prevents other systems from being dynamically assigned the same port.
The following example creates an exclusion for port 59999:

netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1


store=persistent

To compare the HADR checklist with the other best practices, see the comprehensive
Performance best practices checklist.

VM availability settings
To reduce the impact of downtime, consider the following VM best availability settings:

Use proximity placement groups together with accelerated networking for lowest
latency.
Place virtual machine cluster nodes in separate availability zones to protect from
datacenter-level failures or in a single availability set for lower-latency redundancy
within the same datacenter.
Use premium-managed OS and data disks for VMs in an availability set.
Configure each application tier into separate availability sets.

Quorum
Although a two-node cluster will function without a quorum resource, customers are
strictly required to use a quorum resource to have production support. Cluster
validation won't pass any cluster without a quorum resource.

Technically, a three-node cluster can survive a single node loss (down to two nodes)
without a quorum resource, but after the cluster is down to two nodes, if there is
another node loss or communication failure, then there is a risk that the clustered
resources will go offline to prevent a split-brain scenario. Configuring a quorum
resource will allow the cluster to continue online with only one node online.
The disk witness is the most resilient quorum option, but to use a disk witness on a SQL
Server on Azure VM, you must use an Azure Shared Disk which imposes some
limitations to the high availability solution. As such, use a disk witness when you're
configuring your failover cluster instance with Azure Shared Disks, otherwise use a cloud
witness whenever possible.

The following table lists the quorum options available for SQL Server on Azure VMs:

Cloud witness Disk witness File share witness

Supported OS Windows Server 2016+ All All

The cloud witness is ideal for deployments in multiple sites, multiple zones, and
multiple regions. Use a cloud witness whenever possible, unless you're using a
shared-storage cluster solution.
The disk witness is the most resilient quorum option and is preferred for any
cluster that uses Azure Shared Disks (or any shared-disk solution like shared SCSI,
iSCSI, or fiber channel SAN). A Clustered Shared Volume cannot be used as a disk
witness.
The fileshare witness is suitable for when the disk witness and cloud witness are
unavailable options.

To get started, see Configure cluster quorum.

Quorum Voting
It's possible to change the quorum vote of a node participating in a Windows Server
Failover Cluster.

When modifying the node vote settings, follow these guidelines:

Qurom voting guidelines

Start with each node having no vote by default. Each node should only have a vote with explicit
justification.

Enable votes for cluster nodes that host the primary replica of an availability group, or the
preferred owners of a failover cluster instance.

Enable votes for automatic failover owners. Each node that may host a primary replica or FCI as a
result of an automatic failover should have a vote.

If an availability group has more than one secondary replica, only enable votes for the replicas
that have automatic failover.
Qurom voting guidelines

Disable votes for nodes that are in secondary disaster recovery sites. Nodes in secondary sites
should not contribute to the decision of taking a cluster offline if there's nothing wrong with the
primary site.

Have an odd number of votes, with three quorum votes minimum. Add a quorum witness for an
additional vote if necessary in a two-node cluster.

Reassess vote assignments post-failover. You don't want to fail over into a cluster configuration
that doesn't support a healthy quorum.

Connectivity
To match the on-premises experience for connecting to your availability group listener
or failover cluster instance, deploy your SQL Server VMs to multiple subnets within the
same virtual network. Having multiple subnets negates the need for the extra
dependency on an Azure Load Balancer, or a distributed network name to route your
traffic to your listener.

To simplify your HADR solution, deploy your SQL Server VMs to multiple subnets
whenever possible. To learn more, see Multi-subnet AG, and Multi-subnet FCI.

If your SQL Server VMs are in a single subnet, it's possible to configure either a virtual
network name (VNN) and an Azure Load Balancer, or a distributed network name (DNN)
for both failover cluster instances and availability group listeners.

The distributed network name is the recommended connectivity option, when available:

The end-to-end solution is more robust since you no longer have to maintain the
load balancer resource.
Eliminating the load balancer probes minimizes failover duration.
The DNN simplifies provisioning and management of the failover cluster instance
or availability group listener with SQL Server on Azure VMs.

Consider the following limitations:

The client driver must support the MultiSubnetFailover=True parameter.


The DNN feature is available starting with SQL Server 2016 SP3 , SQL Server 2017
CU25 , and SQL Server 2019 CU8 on Windows Server 2016 and later.

To learn more, see the Windows Server Failover Cluster overview.

To configure connectivity, see the following articles:


Availability group: Configure DNN, Configure VNN
Failover cluster instance: Configure DNN, Configure VNN.

Most SQL Server features work transparently with FCI and availability groups when using
the DNN, but there are certain features that may require special consideration. See FCI
and DNN interoperability and AG and DNN interoperability to learn more.

 Tip

Set the MultiSubnetFailover parameter = true in the connection string even for
HADR solutions that span a single subnet to support future spanning of subnets
without needing to update connection strings.

Heartbeat and threshold


Change the cluster heartbeat and threshold settings to relaxed settings. The default
heartbeat and threshold cluster settings are designed for highly tuned on-premises
networks and do not consider the possibility of increased latency in a cloud
environment. The heartbeat network is maintained with UDP 3343, which is traditionally
far less reliable than TCP and more prone to incomplete conversations.

Therefore, when running cluster nodes for SQL Server on Azure VM high availability
solutions, change the cluster settings to a more relaxed monitoring state to avoid
transient failures due to the increased possibility of network latency or failure, Azure
maintenance, or hitting resource bottlenecks.

The delay and threshold settings have a cumulative effect to total health detection. For
example, setting CrossSubnetDelay to send a heartbeat every 2 seconds and setting the
CrossSubnetThreshold to 10 missed heartbeats before taking recovery means the cluster
can have a total network tolerance of 20 seconds before recovery action is taken. In
general, continuing to send frequent heartbeats but having greater thresholds is
preferred.

To ensure recovery during legitimate outages while providing greater tolerance for
transient issues, relax your delay and threshold settings to the recommended values
detailed in the following table:

Setting Windows Server 2012 or later Windows Server 2008R2

SameSubnetDelay 1 second 2 second

SameSubnetThreshold 40 heartbeats 10 heartbeats (max)


Setting Windows Server 2012 or later Windows Server 2008R2

CrossSubnetDelay 1 second 2 second

CrossSubnetThreshold 40 heartbeats 20 heartbeats (max)

Use PowerShell to change your cluster parameters:

Windows Server 2012-2019

PowerShell

(get-cluster).SameSubnetThreshold = 40

(get-cluster).CrossSubnetThreshold = 40

Use PowerShell to verify your changes:

PowerShell

get-cluster | fl *subnet*

Consider the following:

This change is immediate, restarting the cluster or any resources is not required.
Same subnet values should not be greater than cross subnet values.
SameSubnetThreshold <= CrossSubnetThreshold
SameSubnetDelay <= CrossSubnetDelay

Choose relaxed values based on how much down time is tolerable and how long before
a corrective action should occur depending on your application, business needs, and
your environment. If you're not able to exceed the default Windows Server 2019 values,
then at least try to match them, if possible:

For reference, the following table details the default values:

Setting Windows Server Windows Server Windows Server 2008 -


2019 2016 2012 R2

SameSubnetDelay 1 second 1 second 1 second

SameSubnetThreshold 20 heartbeats 10 heartbeats 5 heartbeats

CrossSubnetDelay 1 second 1 second 1 second

CrossSubnetThreshold 20 heartbeats 10 heartbeats 5 heartbeats


To learn more, see Tuning Failover Cluster Network Thresholds.

Relaxed monitoring
If tuning your cluster heartbeat and threshold settings as recommended is insufficient
tolerance and you're still seeing failures due to transient issues rather than true outages,
you can configure your AG or FCI monitoring to be more relaxed. In some scenarios, it
may be beneficial to temporarily relax the monitoring for a period of time given the
level of activity. For example, you may want to relax the monitoring when you're doing
IO intensive workloads such as database backups, index maintenance, DBCC CHECKDB,
etc. Once the activity is complete, set your monitoring to less relaxed values.

2 Warning

Changing these settings may mask an underlying problem, and should be used as a
temporary solution to reduce, rather than eliminate, the likelihood of failure.
Underlying issues should still be investigated and addressed.

Start by increasing the following parameters from their default values for relaxed
monitoring, and adjust as necessary:

Parameter Default Relaxed Description


value Value

Healthcheck 30000 60000 Determines health of the primary replica or node. The cluster
timeout resource DLL sp_server_diagnostics returns results at an
interval that equals 1/3 of the health-check timeout
threshold. If sp_server_diagnostics is slow or is not returning
information, the resource DLL will wait for the full interval of
the health-check timeout threshold before determining that
the resource is unresponsive, and initiating an automatic
failover, if configured to do so.

Failure- 3 2 Conditions that trigger an automatic failover. There are five


Condition failure-condition levels, which range from the least restrictive
Level (level one) to the most restrictive (level five)

Use Transact-SQL (T-SQL) to modify the health check and failure conditions for both AGs
and FCIs.

For availability groups:

SQL
ALTER AVAILABILITY GROUP AG1 SET (HEALTH_CHECK_TIMEOUT =60000);

ALTER AVAILABILITY GROUP AG1 SET (FAILURE_CONDITION_LEVEL = 2);

For failover cluster instances:

SQL

ALTER SERVER CONFIGURATION SET FAILOVER CLUSTER PROPERTY HealthCheckTimeout


= 60000;

ALTER SERVER CONFIGURATION SET FAILOVER CLUSTER PROPERTY


FailureConditionLevel = 2;

Specific to availability groups, start with the following recommended parameters, and
adjust as necessary:

Parameter Default Relaxed Description


value Value

Lease 20000 40000 Prevents split-brain.


timeout

Session 10000 20000 Checks communication issues between replicas. The session-
timeout timeout period is a replica property that controls how long (in
seconds) that an availability replica waits for a ping response
from a connected replica before considering the connection to
have failed. By default, a replica waits 10 seconds for a ping
response. This replica property applies to only the connection
between a given secondary replica and the primary replica of
the availability group.

Max 2 6 Used to avoid indefinite movement of a clustered resource


failures in within multiple node failures. Too low of a value can lead to
specified the availability group being in a failed state. Increase the value
period to prevent short disruptions from performance issues as too
low a value can lead to the AG being in a failed state.

Before making any changes, consider the following:

Do not lower any timeout values below their default values.


Use this equation to calculate the maximum lease time out value:

Lease timeout < (2 * SameSubnetThreshold * SameSubnetDelay) .

Start with 40 seconds. If you're using the relaxed SameSubnetThreshold and


SameSubnetDelay values recommended previously, do not exceed 80 seconds for
the lease timeout value.
For synchronous-commit replicas, changing session-timeout to a high value can
increase HADR_sync_commit waits.

Lease timeout

Use the Failover Cluster Manager to modify the lease timeout settings for your
availability group. See the SQL Server availability group lease health check
documentation for detailed steps.

Session timeout

Use Transact-SQL (T-SQL) to modify the session timeout for an availability group:

SQL

ALTER AVAILABILITY GROUP AG1

MODIFY REPLICA ON 'INSTANCE01' WITH (SESSION_TIMEOUT = 20);

Max failures in specified period

Use the Failover Cluster Manager to modify the Max failures in specified period value:

1. Select Roles in the navigation pane.


2. Under Roles, right-click the clustered resource and choose Properties.
3. Select the Failover tab, and increase the Max failures in specified period value as
desired.

Resource limits
VM or disk limits could result in a resource bottleneck that impacts the health of the
cluster, and impedes the health check. If you're experiencing issues with resource limits,
consider the following:

Ensure your OS, drivers, and SQL Server are at the latest builds.
Optimize SQL Server on Azure VM environment as described in the performance
guidelines for SQL Server on Azure Virtual Machines
Reduce or spread out the workload to reduce utilization without exceeding
resource limits
Tune the SQL Server workload if there is any opportunity, such as
Add/optimize indexes
Update statistics if needed and if possible, with Full scan
Use features like resource governor (starting with SQL Server 2014, enterprise
only) to limit resource utilization during specific workloads, such as backups or
index maintenance.
Move to a VM or disk that has higher limits to meet or exceed the demands of
your workload.

Networking
Deploy your SQL Server VMs to multiple subnets whenever possible to avoid the
dependency on an Azure Load Balancer or a distributed network name (DNN) to route
traffic to your HADR solution.

Use a single NIC per server (cluster node). Azure networking has physical redundancy, which makes additional NICs unnecessary on an Azure virtual machine guest cluster. The cluster validation report warns you that the nodes are reachable only on a single network; you can ignore this warning on Azure virtual machine guest failover clusters.

Bandwidth limits for a particular VM are shared across its NICs, so adding an additional NIC does not improve availability group performance for SQL Server on Azure VMs. As such, there is no need to add a second NIC.

The non-RFC-compliant DHCP service in Azure can cause the creation of certain failover
cluster configurations to fail. This failure happens because the cluster network name is
assigned a duplicate IP address, such as the same IP address as one of the cluster nodes.
This is an issue when you use availability groups, which depend on the Windows failover
cluster feature.

Consider the scenario when a two-node cluster is created and brought online:

1. The cluster comes online, and then NODE1 requests a dynamically assigned IP
address for the cluster network name.
2. The DHCP service doesn't give any IP address other than NODE1's own IP address,
because the DHCP service recognizes that the request comes from NODE1 itself.
3. Windows detects that a duplicate address is assigned both to NODE1 and to the
failover cluster's network name, and the default cluster group fails to come online.
4. The default cluster group moves to NODE2. NODE2 treats NODE1's IP address as
the cluster IP address and brings the default cluster group online.
5. When NODE2 tries to establish connectivity with NODE1, packets directed at
NODE1 never leave NODE2 because it resolves NODE1's IP address to itself.
NODE2 can't establish connectivity with NODE1, and then loses quorum and shuts
down the cluster.
6. NODE1 can send packets to NODE2, but NODE2 can't reply. NODE1 loses quorum
and shuts down the cluster.
You can avoid this scenario by assigning an unused static IP address to the cluster network name in order to bring the cluster network name online, and then adding that IP address to Azure Load Balancer.

If the SQL Server database engine, Always On availability group listener, failover cluster
instance health probe, database mirroring endpoint, cluster core IP resource, or any
other SQL resource is configured to use a port between 49,152 and 65,536 (the default
dynamic port range for TCP/IP), add an exclusion for each port. Doing so will prevent
other system processes from being dynamically assigned the same port. The following
example creates an exclusion for port 59999:

netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1 store=persistent

It's important to configure the port exclusion while the port is not in use; otherwise the command fails with a message like "The process cannot access the file because it is being used by another process."

To confirm that the exclusions have been configured correctly, use the following command: netsh int ipv4 show excludedportrange tcp.

Setting this exclusion for the availability group role IP probe port should prevent events such as Event ID 1069 with status 10048. This event appears in the Windows failover cluster events with the following message:

Cluster resource '<IP name in AG role>' of type 'IP Address' in cluster role '<AG Name>' failed.

An Event ID 1069 with status 10048 can be identified in the cluster logs by entries like:

Resource IP Address 10.0.1.0 called SetResourceStatusEx: checkpoint 5. Old state OnlinePending, new state OnlinePending, AppSpErrorCode 0, Flags 0, nores=false

IP Address <IP Address 10.0.1.0>: IpaOnlineThread: Listening on probe port 59999 failed with status 10048

Status 10048 (WSAEADDRINUSE) occurs when an application attempts to bind a socket to an IP address/port that has already been used for an existing socket.

This can be caused by an internal process taking the same port that is defined as the probe port. Remember that the probe port is used by Azure Load Balancer to check the status of a backend pool instance. If the health probe fails to get a response from a backend instance, no new connections are sent to that backend instance until the health probe succeeds again.
Known issues
Review the resolutions for the following common known issues and errors.

Resource contention (I/O in particular) causes failover

Exhausting the I/O or CPU capacity of the VM can cause your availability group to fail over. Identifying the contention that occurs right before the failover is the most reliable way to determine the cause of an automatic failover. Monitor Azure Virtual Machines to look at the storage I/O utilization metrics to understand VM-level or disk-level latency.

Follow these steps to review the Azure VM Overall IO Exhaustion event:

1. Navigate to your Virtual Machine in the Azure portal (not the SQL virtual machines resource).

2. Select Metrics under Monitoring to open the Metrics page.

3. Select Local time to specify the time range you're interested in, and the time zone,
either local to the VM, or UTC/GMT.

4. Select Add metric to add the following two metrics to the graph:

   VM Cached Bandwidth Consumed Percentage
   VM Uncached Bandwidth Consumed Percentage

Azure VM HostEvents cause failover

It's possible for an Azure VM HostEvent to cause your availability group to fail over. If you believe an Azure VM HostEvent caused a failover, check the Azure Monitor activity log and the Azure VM Resource Health overview.

The Azure Monitor activity log is a platform log in Azure that provides insight into
subscription-level events. The activity log includes information like when a resource is
modified or a virtual machine is started. You can view the activity log in the Azure portal
or retrieve entries with PowerShell and the Azure CLI.

To check the Azure Monitor activity log, follow these steps:

1. Navigate to your Virtual Machine in Azure portal

2. Select Activity Log on the Virtual Machine blade

3. Select Timespan and then choose the time frame when your availability group
failed over. Select Apply.

If Azure has further information about the root cause of a platform-initiated unavailability, that information may be posted on the Azure VM Resource Health overview page up to 72 hours after the initial unavailability. This information is currently only available for virtual machines.

1. Navigate to your Virtual Machine in the Azure portal.
2. Select Resource Health under the Health blade.

You can also configure alerts based on health events from this page.

Cluster node removed from membership

If the Windows cluster heartbeat and threshold settings are too aggressive for your environment, you may frequently see the following message in the system event log:

Error 1135

Cluster node 'Node1' was removed from the active failover cluster membership. The Cluster service on this node may have stopped. This could also be due to the node having lost communication with other active nodes in the failover cluster. Run the Validate a Configuration Wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapters on this node. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.

For more information, review Troubleshooting cluster issue with Event ID 1135.

Lease has expired / Lease is no longer valid

If monitoring is too aggressive for your environment, you may see frequent availability group or FCI restarts, failures, or failovers. Additionally, for availability groups, you may see the following messages in the SQL Server error log:

Error 19407: The lease between availability group 'PRODAG' and the Windows Server Failover Cluster has expired. A connectivity issue occurred between the instance of SQL Server and the Windows Server Failover Cluster. To determine whether the availability group is failing over correctly, check the corresponding availability group resource in the Windows Server Failover Cluster.

Error 19419: The renewal of the lease between availability group '%.*ls' and the Windows Server Failover Cluster failed because the existing lease is no longer valid.

Connection timeout

If the session timeout is too aggressive for your availability group environment, you may see the following messages frequently:

Error 35201: A connection timeout has occurred while attempting to establish a connection to availability replica 'replicaname' with ID [availability_group_id]. Either a networking or firewall issue exists, or the endpoint address provided for the replica is not the database mirroring endpoint of the host server instance.

Error 35206: A connection timeout has occurred on a previously established connection to availability replica 'replicaname' with ID [availability_group_id]. Either a networking or a firewall issue exists, or the availability replica has transitioned to the resolving role.
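When these errors appear, it can help to check both the configured endpoint address and the current connection state of each replica. This is a sketch using the sys.availability_replicas catalog view and the sys.dm_hadr_availability_replica_states DMV (run it on the primary for a cluster-wide view):

SQL

-- Configured endpoint URL plus the current connection and synchronization state per replica.
SELECT r.replica_server_name,
       r.endpoint_url,
       s.role_desc,
       s.connected_state_desc,
       s.synchronization_health_desc
FROM sys.availability_replicas AS r
JOIN sys.dm_hadr_availability_replica_states AS s
    ON s.replica_id = r.replica_id;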

Group not failing over

If the Maximum Failures in the Specified Period value is too low and you're experiencing intermittent failures due to transient issues, your availability group could end up in a failed state. Increase this value to tolerate more transient failures:

Not failing over group <Resource name>, failoverCount 3, failoverThresholdSetting <Number>, computedFailoverThreshold 2.

Event 1196 - Network name resource failed registration of associated DNS name

Check the NIC settings for each of your cluster nodes to make sure there are no external DNS records present.
Ensure the A record for your cluster exists on your internal DNS servers. If not, manually create a new A record in the DNS server for the Cluster Access Control object, and check Allow any authenticated user to update DNS records with the same owner name.
Take the Cluster Name resource with its IP resource offline and fix it.

Event 157 - Disk has been surprise removed

This can happen if the Storage Spaces property AutomaticClusteringEnabled is set to True in an availability group environment; change it to False. Running a validation report with the storage option can also trigger a disk reset or surprise-removal event, as can throttling by the storage system.

Event 1206 - Cluster network name resource cannot be brought online

The computer object associated with the resource could not be updated in the domain. Make sure you have the appropriate permissions on the domain.

Windows Clustering errors

You may encounter issues while setting up a Windows failover cluster or its connectivity if the cluster service ports are not open for communication.

If you are on Windows Server 2019 and you do not see a Windows cluster IP, you have configured a distributed network name (DNN), which is only supported on SQL Server 2019. If you have an earlier version of SQL Server, you can remove and re-create the cluster using a network name.

Review other Windows failover clustering events and errors, and their solutions, in the Windows Server troubleshooting documentation.

Next steps
To learn more, see:

HADR settings for SQL Server on Azure VMs


Windows Server Failover Cluster with SQL Server on Azure VMs
Always On availability groups with SQL Server on Azure VMs
Failover cluster instances with SQL Server on Azure VMs
Failover cluster instance overview
Application patterns and development
strategies for SQL Server on Azure
Virtual Machines
Article • 11/09/2022

Applies to:
SQL Server on Azure VM

7 Note

Azure has two different deployment models for creating and working with
resources: Resource Manager and classic. This article covers using both models,
but Microsoft recommends that most new deployments use the Resource Manager
model.

Summary:
Determining which application pattern or patterns to use for your SQL Server-based
applications in an Azure environment is an important design decision and it requires a
solid understanding of how SQL Server and each infrastructure component of Azure
work together. With SQL Server in Azure Infrastructure Services, you can easily migrate,
maintain, and monitor your existing SQL Server applications built on Windows Server to
virtual machines (VMs) in Azure.

The goal of this article is to provide solution architects and developers a foundation for
good application architecture and design, which they can follow when migrating
existing applications to Azure as well as developing new applications in Azure.

For each application pattern, you will find an on-premises scenario, its respective cloud-
enabled solution, and the related technical recommendations. In addition, the article
discusses Azure-specific development strategies so that you can design your
applications correctly. Because many application patterns are possible, architects and developers should choose the pattern most appropriate for their applications and users.

Technical contributors: Luis Carlos Vargas Herring, Madhan Arumugam Ramakrishnan

Technical reviewers: Corey Sanders, Drew McDaniel, Narayan Annamalai, Nir Mashkowski, Sanjay Mishra, Silvano Coriani, Stefan Schackow, Tim Hickey, Tim Wieman, Xin Jin
Introduction
You can develop many types of n-tier applications by separating the components of the
different application layers on different machines as well as in separate components. For
example, you can place the client application and business rules components in one
machine, front-end web tier and data access tier components in another machine, and a
back-end database tier in another machine. This kind of structuring helps isolate each
tier from each other. If you change where data comes from, you don't need to change
the client or web application but only the data access tier components.

A typical n-tier application includes the presentation tier, the business tier, and the data
tier:

Presentation: The presentation tier (web tier, front-end tier) is the layer in which users interact with an application.

Business: The business tier (middle tier) is the layer that the presentation tier and the data tier use to communicate with each other, and it includes the core functionality of the system.

Data: The data tier is basically the server that stores an application's data (for example, a server running SQL Server).

Application layers describe the logical groupings of the functionality and components in
an application; whereas tiers describe the physical distribution of the functionality and
components on separate physical servers, computers, networks, or remote locations. The
layers of an application may reside on the same physical computer (the same tier) or
may be distributed over separate computers (n-tier), and the components in each layer
communicate with components in other layers through well-defined interfaces. You can
think of the term tier as referring to physical distribution patterns such as two-tier,
three-tier, and n-tier. A 2-tier application pattern contains two application tiers:
application server and database server. The direct communication happens between the
application server and the database server. The application server contains both web-
tier and business-tier components. In the 3-tier application pattern, there are three application tiers: the web server; the application server, which contains the business logic tier and/or business-tier data access components; and the database server. Communication between the web server and the database server happens through the application server. For detailed information on application layers and tiers, see the Microsoft Application Architecture Guide.

Before you start reading this article, you should be familiar with the fundamental concepts of SQL Server and Azure. For information, see SQL Server Books Online, SQL Server on Azure Virtual Machines, and Azure.com.

This article describes several application patterns that can be suitable for simple applications as well as highly complex enterprise applications. Before detailing each pattern, we recommend that you familiarize yourself with the available data storage services in Azure, such as Azure Storage, Azure SQL Database, and SQL Server in an Azure virtual machine. To make the best design decisions for your applications, clearly understand when to use each data storage service.

Choose SQL Server on Azure Virtual Machines when:

You need control over SQL Server and Windows. For example, this might include the SQL Server version, special hotfixes, performance configuration, etc.

You need full compatibility with SQL Server and want to move existing applications to Azure as-is.

You want to leverage the capabilities of the Azure environment but Azure SQL
Database does not support all the features that your application requires. This
could include the following areas:
Database size: At the time this article was updated, SQL Database supports a
database of up to 1 TB of data. If your application requires more than 1 TB of
data and you don't want to implement custom sharding solutions, it's
recommended that you use SQL Server in an Azure virtual machine. For the
latest information, see Scaling Out Azure SQL Database, DTU-Based Purchasing
Model, and vCore-Based Purchasing Model (preview).
HIPAA compliance: Healthcare customers and Independent Software Vendors
(ISVs) might choose SQL Server on Azure Virtual Machines instead of Azure SQL
Database because SQL Server on Azure Virtual Machines is covered by HIPAA
Business Associate Agreement (BAA). For information on compliance, see
Microsoft Azure Trust Center: Compliance .
Instance-level features: At this time, SQL Database doesn't support features
that live outside of the database (such as Linked Servers, Agent jobs, FileStream,
Service Broker, etc.). For more information, see Azure SQL Database Guidelines
and Limitations.

1-tier (simple): Single virtual machine


In this application pattern, you deploy your SQL Server application and database to a
standalone virtual machine in Azure. The same virtual machine contains your client/web
application, business components, data access layer, and the database server. The
presentation, business, and data access code are logically separated but are physically
located in a single-server machine. Most customers start with this application pattern
and then scale out by adding more web roles or virtual machines to their system.

This application pattern is useful when:

You want to perform a simple migration to the Azure platform to evaluate whether the platform meets your application's requirements.
You want to keep all the application tiers hosted in the same virtual machine in the
same Azure data center to reduce the latency between tiers.
You want to quickly provision development and test environments for short
periods of time.
You want to perform stress testing for varying workload levels but at the same time
you do not want to own and maintain many physical machines all the time.

The following diagram demonstrates a simple on-premises scenario and how you can
deploy its cloud enabled solution in a single virtual machine in Azure.

Deploying the business layer (business logic and data access components) on the same
physical tier as the presentation layer can maximize application performance, unless you
must use a separate tier due to scalability or security concerns.

Since this is a very common pattern to start with, you might find the following article on
migration useful for moving your data to your SQL Server VM: Migration guide: SQL
Server to SQL Server on Azure Virtual Machines.

3-tier (simple): Multiple virtual machines


In this application pattern, you deploy a 3-tier application in Azure by placing each application tier in a different virtual machine, which provides a flexible environment for easy scale-up and scale-out scenarios. One virtual machine contains your client/web application, another hosts your business components, and a third hosts the database server.

This application pattern is useful when:

You want to perform a migration of complex database applications to Azure Virtual


Machines.
You want different application tiers to be hosted in different regions. For example,
you might have shared databases that are deployed to multiple regions for
reporting purposes.
You want to move enterprise applications from on-premises virtualized platforms
to Azure Virtual Machines. For a detailed discussion on enterprise applications, see
What is an Enterprise Application.
You want to quickly provision development and test environments for short
periods of time.
You want to perform stress testing for varying workload levels but at the same time
you do not want to own and maintain many physical machines all the time.

The following diagram demonstrates how you can place a simple 3-tier application in
Azure by placing each application tier in a different virtual machine.

In this application pattern, there is only one virtual machine in each tier. If you have
multiple VMs in Azure, we recommend that you set up a virtual network. Azure Virtual
Network creates a trusted security boundary and also allows VMs to communicate
among themselves over the private IP address. In addition, always make sure that all
Internet connections only go to the presentation tier. When following this application
pattern, manage the network security group rules to control access. For more
information, see Allow external access to your VM using the Azure portal.

In the diagram, Internet Protocols can be TCP, UDP, HTTP, or HTTPS.

7 Note

Setting up a virtual network in Azure is free of charge. However, you are charged
for the VPN gateway that connects to on-premises. This charge is based on the
amount of time that connection is provisioned and available.

2-tier and 3-tier with presentation tier scale-


out
In this application pattern, you deploy 2-tier or 3-tier database application to Azure
Virtual Machines by placing each application tier in a different virtual machine. In
addition, you scale out the presentation tier due to increased volume of incoming client
requests.

This application pattern is useful when:

You want to move enterprise applications from on-premises virtualized platforms


to Azure Virtual Machines.
You want to scale out the presentation tier due to increased volume of incoming
client requests.
You want to quickly provision development and test environments for short
periods of time.
You want to perform stress testing for varying workload levels but at the same time
you do not want to own and maintain many physical machines all the time.
You want to own an infrastructure environment that can scale up and down on
demand.

The following diagram demonstrates how you can place the application tiers in multiple
virtual machines in Azure by scaling out the presentation tier due to increased volume
of incoming client requests. As seen in the diagram, Azure Load Balancer is responsible
for distributing traffic across multiple virtual machines and also determining which web
server to connect to. Having multiple instances of the web servers behind a load
balancer ensures the high availability of the presentation tier.
Best practices for 2-tier, 3-tier, or n-tier patterns that have
multiple VMs in one tier
It's recommended that you place the virtual machines that belong to the same tier in the same cloud service and in the same availability set. For example, place a set of web servers in CloudService1 and AvailabilitySet1, and a set of database servers in CloudService2 and AvailabilitySet2. An availability set in Azure enables you to place the high availability nodes into separate fault domains and upgrade domains.

To leverage multiple VM instances of a tier, you need to configure Azure Load Balancer
between application tiers. To configure Load Balancer in each tier, create a load-
balanced endpoint on each tier's VMs separately. For a specific tier, first create VMs in
the same cloud service. This ensures that they have the same public Virtual IP address.
Next, create an endpoint on one of the virtual machines on that tier. Then, assign the
same endpoint to the other virtual machines on that tier for load balancing. By creating
a load-balanced set, you distribute traffic across multiple virtual machines and also allow the Load Balancer to determine which node to connect to when a backend VM node fails.
For example, having multiple instances of the web servers behind a load balancer
ensures the high availability of the presentation tier.
As a best practice, always make sure that all internet connections first go to the
presentation tier. The presentation layer accesses the business tier, and then the
business tier accesses the data tier. For more information on how to allow access to the
presentation layer, see Allow external access to your VM using the Azure portal.

Note that the Load Balancer in Azure works similarly to load balancers in an on-premises environment. For more information, see Load balancing for Azure infrastructure services.

In addition, we recommend that you set up a private network for your virtual machines
by using Azure Virtual Network. This allows them to communicate among themselves
over the private IP address. For more information, see Azure Virtual Network.

2-tier and 3-tier with business tier scale-out


In this application pattern, you deploy a 2-tier or 3-tier database application to Azure
Virtual Machines by placing each application tier in a different virtual machine. In
addition, you might want to distribute the application server components to multiple
virtual machines due to the complexity of your application.

This application pattern is useful when:

You want to move enterprise applications from on-premises virtualized platforms


to Azure Virtual Machines.
You want to distribute the application server components to multiple virtual
machines due to the complexity of your application.
You want to move business logic heavy on-premises LOB (line-of-business)
applications to Azure Virtual Machines. LOB applications are a set of critical
computer applications that are vital to running an enterprise, such as accounting,
human resources (HR), payroll, supply chain management, and resource planning
applications.
You want to quickly provision development and test environments for short
periods of time.
You want to perform stress testing for varying workload levels but at the same time
you do not want to own and maintain many physical machines all the time.
You want to own an infrastructure environment that can scale up and down on
demand.

The following diagram demonstrates an on-premises scenario and its cloud enabled
solution. In this scenario, you place the application tiers in multiple virtual machines in
Azure by scaling out the business tier, which contains the business logic tier and data
access components. As seen in the diagram, Azure Load Balancer is responsible for
distributing traffic across multiple virtual machines and also determining which web
server to connect to. Having multiple instances of the application servers behind a load
balancer ensures the high availability of the business tier. For more information, see Best
practices for 2-tier, 3-tier, or n-tier application patterns that have multiple virtual
machines in one tier.

2-tier and 3-tier with presentation and business


tiers scale-out and HADR
In this application pattern, you deploy a 2-tier or 3-tier database application to Azure
Virtual Machines by distributing the presentation tier (web server) and the business tier
(application server) components to multiple virtual machines. In addition, you
implement high-availability and disaster recovery (HADR) solutions for your databases in
Azure Virtual Machines.

This application pattern is useful when:

You want to move enterprise applications from virtualized platforms on-premises


to Azure by implementing SQL Server high availability and disaster recovery
capabilities.
You want to scale out the presentation tier and the business tier due to increased
volume of incoming client requests and the complexity of your application.
You want to quickly provision development and test environments for short
periods of time.
You want to perform stress testing for varying workload levels but at the same time
you do not want to own and maintain many physical machines all the time.
You want to own an infrastructure environment that can scale up and down on
demand.

The following diagram demonstrates an on-premises scenario and its cloud enabled
solution. In this scenario, you scale out the presentation tier and the business tier
components in multiple virtual machines in Azure. In addition, you implement high
availability and disaster recovery (HADR) techniques for SQL Server databases in Azure.

Running multiple copies of an application in different VMs ensures that requests are load balanced across them. When you have multiple virtual machines, you need to make sure that all your VMs are accessible and running at any point in time. If you configure load balancing, Azure Load Balancer tracks the health of VMs and directs incoming calls to the healthy, functioning VM nodes.
set up load balancing of the virtual machines, see Load balancing for Azure
infrastructure services. Having multiple instances of web and application servers behind
a load balancer ensures the high availability of the presentation and business tiers.
Best practices for application patterns requiring SQL
HADR
When you set up SQL Server high availability and disaster recovery solutions in Azure
Virtual Machines, setting up a virtual network for your virtual machines using Azure
Virtual Network is mandatory. Virtual machines within a Virtual Network will have a
stable private IP address even after a service downtime, thus you can avoid the update
time required for DNS name resolution. In addition, the virtual network allows you to
extend your on-premises network to Azure and creates a trusted security boundary. For
example, if your application has corporate domain restrictions (such as, Windows
authentication, Active Directory), setting up Azure Virtual Network is necessary.

Most customers who run production code on Azure keep both the primary and secondary replicas in Azure.

For comprehensive information and tutorials on high availability and disaster recovery
techniques, see High Availability and Disaster Recovery for SQL Server on Azure Virtual
Machines.
2-tier and 3-tier using Azure Virtual Machines
and Cloud Services
In this application pattern, you deploy 2-tier or 3-tier application to Azure by using both
Azure Cloud Services (web and worker roles - Platform as a Service (PaaS)) and Azure
Virtual Machines (Infrastructure as a Service (IaaS)). Using Azure Cloud Services for the
presentation tier/business tier and SQL Server in Azure Virtual Machines for the data tier
is beneficial for most applications running on Azure. The reason is that a compute instance running on Cloud Services provides easy management, deployment, monitoring, and scale-out.

With Cloud Services, Azure maintains the infrastructure for you, performs routine
maintenance, patches the operating systems, and attempts to recover from service and
hardware failures. When your application needs to scale out, automatic and manual scale-out options are available for your cloud service project by increasing or decreasing the number of instances or virtual machines that your application uses.
you can use on-premises Visual Studio to deploy your application to a cloud service
project in Azure.

In summary, if you don't want to own extensive administrative tasks for the
presentation/business tier and your application does not require any complex
configuration of software or the operating system, use Azure Cloud Services. If Azure
SQL Database does not support all the features you are looking for, use SQL Server in an
Azure virtual machine for the data tier. Running an application on Azure Cloud Services
and storing data in Azure Virtual Machines combines the benefits of both services. For a
detailed comparison, see the section in this topic on Comparing development strategies
in Azure.

In this application pattern, the presentation tier includes a web role, which is a Cloud Services component running in the Azure execution environment, customized for web application programming as supported by IIS and ASP.NET. The business or back-end tier includes a worker role, which is a Cloud Services component running in the Azure execution environment, useful for generalized development; it may perform background processing for a web role. The database tier resides in a SQL Server virtual machine in Azure. Communication between the presentation tier and the database tier happens directly, or through the business-tier worker role components.

This application pattern is useful when:

You want to move enterprise applications from virtualized platforms on-premises


to Azure by implementing SQL Server high availability and disaster recovery
capabilities.
You want to own an infrastructure environment that can scale up and down on
demand.
Azure SQL Database does not support all the features that your application or
database needs.
You want to perform stress testing for varying workload levels but at the same time
you do not want to own and maintain many physical machines all the time.

The following diagram demonstrates an on-premises scenario and its cloud enabled
solution. In this scenario, you place the presentation tier in web roles, the business tier in
worker roles, and the data tier in virtual machines in Azure. Running multiple copies of the presentation tier in different web roles ensures that requests are load balanced across them. When you combine Azure Cloud Services with Azure Virtual Machines, we
recommend that you set up Azure Virtual Network as well. With Azure Virtual Network,
you can have stable and persistent private IP addresses within the same cloud service in
the cloud. Once you define a virtual network for your virtual machines and cloud
services, they can start communicating among themselves over the private IP address. In
addition, having virtual machines and Azure web/worker roles in the same Azure Virtual
Network provides low latency and more secure connectivity. For more information, see
What is a cloud service.

As seen in the diagram, Azure Load Balancer distributes traffic across multiple virtual
machines and also determines which web server or application server to connect to.
Having multiple instances of the web and application servers behind a load balancer
ensures the high availability of the presentation tier and the business tier. For more
information, see Best practices for application patterns requiring SQL HADR.
Another approach to implement this application pattern is to use a consolidated web
role that contains both presentation tier and business tier components as shown in the
following diagram. This application pattern is useful for applications that require stateful
design. Since Azure provides stateless compute nodes on web and worker roles, we
recommend that you implement a logic to store session state using one of the following
technologies: Azure Caching, Azure Table Storage or Azure SQL Database.
Pattern with Azure Virtual Machines, Azure SQL
Database, and Azure App Service (Web Apps)
The primary goal of this application pattern is to show you how to combine Azure
infrastructure as a service (IaaS) components with Azure platform-as-a-service
components (PaaS) in your solution. This pattern is focused on Azure SQL Database for
relational data storage. It does not include SQL Server in an Azure virtual machine, which
is part of the Azure infrastructure as a service offering.

In this application pattern, you deploy a database application to Azure by placing the
presentation and business tiers in the same virtual machine and accessing a database in
Azure SQL Database (SQL Database) servers. You can implement the presentation tier by
using traditional IIS-based web solutions. Or, you can implement a combined
presentation and business tier by using Azure App Service.

This application pattern is useful when:

You already have an existing SQL Database server configured in Azure and you
want to test your application quickly.
You want to test the capabilities of Azure environment.
You want to quickly provision development and test environments for short
periods of time.
Your business logic and data access components can be self-contained within a
web application.

The following diagram demonstrates an on-premises scenario and its cloud enabled
solution. In this scenario, you place the application tiers in a single virtual machine in
Azure and access data in Azure SQL Database.

If you choose to implement a combined web and application tier by using Azure Web
Apps, we recommend that you keep the middle-tier or application tier as dynamic-link
libraries (DLLs) in the context of a web application.

In addition, review the recommendations given in the Comparing web development


strategies in Azure section at the end of this article to learn more about programming
techniques.

N-tier hybrid application pattern


In the n-tier hybrid application pattern, you implement your application in multiple tiers distributed between on-premises and Azure. This creates a flexible and reusable hybrid system in which you can modify or add a specific tier without changing the other tiers. To extend your corporate network to the cloud, you use the Azure Virtual Network service.

This hybrid application pattern is useful when:

You want to build applications that run partly in the cloud and partly on-premises.
You want to migrate some or all elements of an existing on-premises application to
the cloud.
You want to move enterprise applications from on-premises virtualized platforms
to Azure.
You want to own an infrastructure environment that can scale up and down on
demand.
You want to quickly provision development and test environments for short
periods of time.
You want a cost effective way to take backups for enterprise database applications.

The following diagram demonstrates an n-tier hybrid application pattern that spans
across on-premises and Azure. As shown in the diagram, on-premises infrastructure
includes Active Directory Domain Services domain controller to support user
authentication and authorization. Note that the diagram demonstrates a scenario, where
some parts of the data tier live in an on-premises data center whereas some parts of the
data tier live in Azure. Depending on your application's needs, you can implement
several other hybrid scenarios. For example, you might keep the presentation tier and
the business tier in an on-premises environment but the data tier in Azure.
In Azure, you can use Active Directory as a standalone cloud directory for your
organization, or you can also integrate existing on-premises Active Directory with Azure
Active Directory. As seen in the diagram, the business tier components can access to
multiple data sources, such as to SQL Server in Azure via a private internal IP address, to
on-premises SQL Server via Azure Virtual Network, or to SQL Database using the .NET
Framework data provider technologies. In this diagram, Azure SQL Database is an
optional data storage service.

In the n-tier hybrid application pattern, you can implement the following workflow in the
order specified:

1. Identify enterprise database applications that need to be moved to the cloud by using the Microsoft Assessment and Planning (MAP) Toolkit. The MAP Toolkit gathers inventory and performance data from computers you are considering for virtualization and provides recommendations on capacity and assessment planning.

2. Plan the resources and configuration needed in the Azure platform, such as
storage accounts and virtual machines.

3. Set up network connectivity between the corporate network on-premises and


Azure Virtual Network. To set up the connection between the corporate network
on-premises and a virtual machine in Azure, use one of the following two methods:
a. Establish a connection between on-premises and Azure via public end points on
a virtual machine in Azure. This method provides an easy setup and enables you
to use SQL Server authentication in your virtual machine. In addition, set up
your network security group rules to control public traffic to the VM. For more
information, see Allow external access to your VM using the Azure portal.

b. Establish a connection between on-premises and Azure via Azure Virtual Private
network (VPN) tunnel. This method allows you to extend domain policies to a
virtual machine in Azure. In addition, you can set up firewall rules and use
Windows authentication in your virtual machine. Currently, Azure supports
secure site-to-site VPN and point-to-site VPN connections:

With secure site-to-site connection, you can establish network connectivity


between your on-premises network and your virtual network in Azure. It is
recommended for connecting your on-premises data center environment
to Azure.
With secure point-to-site connection, you can establish network
connectivity between your virtual network in Azure and your individual
computers running anywhere. It is mostly recommended for development
and test purposes.

For information on how to connect to SQL Server in Azure, see Connect to a


SQL Server virtual machine on Azure.

4. Set up scheduled jobs and alerts that back up on-premises data in a virtual
machine disk in Azure. For more information, see SQL Server Backup and Restore
with Azure Blob Storage and Backup and Restore for SQL Server on Azure Virtual
Machines.

5. Depending on your application's needs, you can implement one of the following
three common scenarios:
a. You can keep your web server, application server, and insensitive data in a
database server in Azure whereas you keep the sensitive data on-premises.
b. You can keep your web server and application server on-premises whereas the
database server in a virtual machine in Azure.
c. You can keep your database server, web server, and application server on-
premises whereas you keep the database replicas in virtual machines in Azure.
This setting allows the on-premises web servers or reporting applications to
access the database replicas in Azure and reduce the workload on the on-premises database. We recommend this scenario for heavy read workloads and development purposes.
information on creating database replicas in Azure, see Always On Availability
Groups at High Availability and Disaster Recovery for SQL Server on Azure
Virtual Machines.

Comparing web development strategies in


Azure
To implement and deploy a multi-tier SQL Server-based application in Azure, you can
use one of the following two programming methods:

Set up a traditional web server (IIS - Internet Information Services) in Azure and
access databases in SQL Server on Azure Virtual Machines.
Implement and deploy a cloud service to Azure. Then, make sure that this cloud
service can access databases in SQL Server on Azure Virtual Machines. A cloud
service can include multiple web and worker roles.

The following table provides a comparison of traditional web development with Azure
Cloud Services and Azure Web Apps with respect to SQL Server on Azure Virtual
Machines. The table includes Azure Web Apps as it is possible to use SQL Server in an
Azure VM as a data source for Azure Web Apps via its public virtual IP address or DNS
name.

Application migration from on-premises

Traditional web development in Azure Virtual Machines: Existing applications as-is.
Cloud Services in Azure: Applications need web and worker roles.
Web hosting with Azure Web Apps: Existing applications as-is, but suited for self-contained web applications and web services that require quick scalability.

Development and deployment

Traditional web development in Azure Virtual Machines: Visual Studio, WebMatrix, Visual Web Developer, WebDeploy, FTP, TFS, IIS Manager, PowerShell.
Cloud Services in Azure: Visual Studio, Azure SDK, TFS, PowerShell. Each cloud service has two environments to which you can deploy your service package and configuration: staging and production. You can deploy a cloud service to the staging environment to test it before you promote it to production.
Web hosting with Azure Web Apps: Visual Studio, WebMatrix, Visual Web Developer, FTP, GIT, BitBucket, CodePlex, DropBox, GitHub, Mercurial, TFS, Web Deploy, PowerShell.

Administration and setup

Traditional web development in Azure Virtual Machines: You are responsible for administrative tasks on the application, data, firewall rules, virtual network, and operating system.
Cloud Services in Azure: You are responsible for administrative tasks on the application, data, firewall rules, and virtual network.
Web hosting with Azure Web Apps: You are responsible for administrative tasks on the application and data only.

High availability and disaster recovery (HADR)

Traditional web development in Azure Virtual Machines: We recommend that you place virtual machines in the same availability set and in the same cloud service. Keeping your VMs in the same availability set allows Azure to place the high availability nodes into separate fault domains and upgrade domains. Similarly, keeping your VMs in the same cloud service enables load balancing, and VMs can communicate directly with one another over the local network within an Azure data center. You are responsible for implementing a high availability and disaster recovery solution for SQL Server on Azure Virtual Machines to avoid any downtime; for supported HADR technologies, see High Availability and Disaster Recovery for SQL Server on Azure Virtual Machines. You are responsible for backing up your own data and application. Azure can move your virtual machines if the host machine in the data center fails due to hardware issues, and there could be planned downtime of your VM when the host machine is updated for security or software updates. Therefore, we recommend that you maintain at least two VMs in each application tier to ensure continuous availability; Azure does not provide an SLA for a single virtual machine.

Cloud Services in Azure: Azure manages failures resulting from the underlying hardware or operating system software. We recommend that you implement multiple instances of a web or worker role to ensure the high availability of your application; for information, see the Cloud Services, Virtual Machines, and Virtual Network Service Level Agreement. You are responsible for backing up your own data and application. For databases residing in a SQL Server database in an Azure VM, you are responsible for implementing a high availability and disaster recovery solution to avoid any downtime; for supported HADR technologies, see High Availability and Disaster Recovery for SQL Server on Azure Virtual Machines. SQL Server database mirroring can be used with Azure Cloud Services (web/worker roles); SQL Server VMs and a cloud service project can be in the same Azure Virtual Network. If the SQL Server VM is not in the same virtual network, you need to create a SQL Server alias to route communication to the instance of SQL Server, and the alias name must match the SQL Server name.

Web hosting with Azure Web Apps: High availability is inherited from Azure worker roles, Azure Blob Storage, and Azure SQL Database. For example, Azure Storage maintains three replicas of all blob, table, and queue data, and at any one time Azure SQL Database keeps three replicas of data running: one primary replica and two secondary replicas. For more information, see Azure Storage and Azure SQL Database. When using SQL Server in an Azure VM as a data source for Azure Web Apps, keep in mind that Azure Web Apps does not support Azure Virtual Network; all connections from Azure Web Apps to SQL Server VMs in Azure must go through public endpoints of the virtual machines. This might cause some limitations for high availability and disaster recovery scenarios. For example, a client application on Azure Web Apps connecting to a SQL Server VM with database mirroring would not be able to connect to the new primary server, because database mirroring requires that you set up Azure Virtual Network between the SQL Server host VMs in Azure; therefore, using SQL Server database mirroring with Azure Web Apps is not currently supported. SQL Server Always On availability groups can be set up when using Azure Web Apps with SQL Server VMs in Azure, but you need to configure the Always On availability group listener to route communication to the primary replica via public load-balanced endpoints.

Cross-premises connectivity

Traditional web development in Azure Virtual Machines: You can use Azure Virtual Network to connect to on-premises.
Cloud Services in Azure: You can use Azure Virtual Network to connect to on-premises.
Web hosting with Azure Web Apps: Azure Virtual Network is supported.

Scalability

Traditional web development in Azure Virtual Machines: Scale-up is available by increasing the virtual machine size or adding more disks; for more information about virtual machine sizes, see Virtual Machine Sizes for Azure. For the database server, scale-out is available via database partitioning techniques and SQL Server Always On availability groups. For heavy read workloads, you can use Always On availability groups on multiple secondary nodes as well as SQL Server replication. For heavy write workloads, you can implement horizontal partitioning of data across multiple physical servers to provide application scale-out. In addition, you can implement scale-out by using SQL Server with data-dependent routing (DDR); with DDR, you implement the partitioning mechanism in the client application, typically in the business tier, to route database requests to multiple SQL Server nodes. The business tier contains mappings to how the data is partitioned and which node contains the data. You can scale applications that are running in virtual machines; for more information, see How to Scale an Application. Important: the AutoScale feature in Azure allows you to automatically increase or decrease the number of virtual machines that your application uses. It guarantees that the end-user experience is not affected negatively during peak periods and that VMs are shut down when demand is low. It's recommended that you do not set the AutoScale option for your cloud service if it includes SQL Server VMs: AutoScale turns a virtual machine on when its CPU usage rises above a threshold, and turns it off when CPU usage falls below one. AutoScale is useful for stateless applications, such as web servers, where any VM can manage the workload without any reference to previous state; it is not useful for stateful applications, such as SQL Server, where only one instance allows writing to the database.

Cloud Services in Azure: Scale-up is available by using multiple web and worker roles; for more information about virtual machine sizes for web roles and worker roles, see Configure Sizes for Cloud Services. When using Cloud Services, you can define multiple roles to distribute processing and also achieve flexible scaling of your application. Each cloud service includes one or more web roles and/or worker roles, each with its own application files and configuration. You can scale up a cloud service by increasing the number of role instances (virtual machines) deployed for a role, and scale down by decreasing the number of role instances; for detailed information, see Azure Execution Models. Scale-out is available via built-in Azure high availability support through the Cloud Services, Virtual Machines, and Virtual Network Service Level Agreement and Load Balancer. For a multi-tier application, we recommend that you connect web/worker role applications to database server VMs via Azure Virtual Network. In addition, Azure provides load balancing for VMs in the same cloud service, spreading user requests across them; virtual machines connected in this way can communicate directly with one another over the local network within an Azure data center. You can set up AutoScale on the Azure portal as well as schedule times; for more information, see How to configure auto scaling for a Cloud Service in the portal.

Web hosting with Azure Web Apps: Scale up and down: you can increase or decrease the size of the instance (VM) reserved for your web site. Scale out: you can add more reserved instances (VMs) for your web site. You can set up AutoScale on the portal as well as schedule times; for more information, see How to Scale Web Apps.
For more information on choosing between these programming methods, see Azure
Web Apps, Cloud Services, and VMs: When to use which.

Next steps
For more information on running SQL Server on Azure Virtual Machines, see SQL Server
on Azure Virtual Machines Overview.
Collect baseline: Performance best
practices for SQL Server on Azure VM
Article • 12/16/2022

Applies to:
SQL Server on Azure VM

This article shows you how to collect a performance baseline, as part of a series of best practices and guidelines to optimize performance for your SQL Server on Azure Virtual Machines (VMs).

There is typically a trade-off between optimizing for costs and optimizing for
performance. This performance best practices series is focused on getting the best
performance for SQL Server on Azure Virtual Machines. If your workload is less
demanding, you might not require every recommended optimization. Consider your
performance needs, costs, and workload patterns as you evaluate these
recommendations.

Overview
For a prescriptive approach, gather performance counters using PerfMon/LogMan and
capture SQL Server wait statistics to better understand general pressures and potential
bottlenecks of the source environment.
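To capture wait statistics with T-SQL, a query like the following (a common baseline pattern against sys.dm_os_wait_stats; the excluded wait types are a representative, not exhaustive, list of benign background waits) shows where the instance spends its time:

SQL

-- Top waits accumulated since the last restart (or since the wait stats were cleared).
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_seconds,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'CHECKPOINT_QUEUE',
                        N'LOGMGR_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
                        N'XE_TIMER_EVENT', N'BROKER_TO_FLUSH', N'WAITFOR')
ORDER BY wait_time_ms DESC;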

Start by collecting the CPU, memory, IOPS, throughput, and latency of the source
workload at peak times following the application performance checklist.

Gather data during peak hours, such as workloads during your typical business day, but also during other high-load processes, such as end-of-day processing and weekend ETL workloads. Consider scaling up your resources for atypically heavy workloads, such as end-of-quarter processing, and then scaling down once the workload completes.

Use the performance analysis to select the VM size that can scale to your workload's performance requirements.

Storage
SQL Server performance depends heavily on the I/O subsystem, and storage performance is measured by IOPS and throughput. Unless your database fits into physical memory, SQL Server constantly brings database pages in and out of the buffer pool. Log files and data files should be treated differently: access to log files is sequential (except when a transaction needs to be rolled back), whereas data files, including tempdb, are accessed randomly. If you have a slow I/O subsystem, your users may experience performance issues such as slow response times and tasks that do not complete due to time-outs.

The Azure Marketplace virtual machines have log files on a physical disk that is separate
from the data files by default. The tempdb data file count and size meet best practices and are targeted to the ephemeral D:\ drive.
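To confirm where the data, log, and tempdb files are placed on your own VM, a quick check against sys.master_files works (tempdb is database_id 2):

SQL

-- File placement for every database; verify log, data, and tempdb separation.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,
       name,
       physical_name
FROM sys.master_files
ORDER BY database_id, type_desc;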

The following PerfMon counters can help validate the IO throughput required by your
SQL Server:

\LogicalDisk\Disk Reads/Sec (read IOPS)


\LogicalDisk\Disk Writes/Sec (write IOPS)
\LogicalDisk\Disk Read Bytes/Sec (read throughput requirements for the data,
log, and tempdb files)
\LogicalDisk\Disk Write Bytes/Sec (write throughput requirements for the data,
log, and tempdb files)
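PerfMon captures these counters at peak times; as a complement, SQL Server's own cumulative file statistics can be read with a sketch like the following, which derives average latency per file from sys.dm_io_virtual_file_stats (values are cumulative since startup, not peaks):

SQL

-- Average I/O latency per database file since instance startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;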

Using IOPS and throughput requirements at peak levels, evaluate VM sizes that match
the capacity from your measurements.

If your workload requires 20K read IOPS and 10K write IOPS, you can either choose
E16s_v3 (with up to 32K cached and 25600 uncached IOPS) or M16_s (with up to 20K
cached and 10K uncached IOPS) with 2 P30 disks striped using Storage Spaces.

Make sure to understand both throughput and IOPS requirements of the workload as
VMs have different scale limits for IOPS and throughput.

Memory
Track both the external memory used by the OS and the memory used internally by SQL Server. Identifying pressure on either component helps size virtual machines and identify opportunities for tuning.

The following PerfMon counters can help validate the memory health of a SQL Server
virtual machine:

\Memory\Available MBytes
\SQLServer:Memory Manager\Target Server Memory (KB)
\SQLServer:Memory Manager\Total Server Memory (KB)
\SQLServer:Buffer Manager\Lazy writes/sec
\SQLServer:Buffer Manager\Page life expectancy
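
These counters can be sampled the same way with Get-Counter. Note that the SQLServer:
counter paths below assume a default instance; named instances expose them under
MSSQL$InstanceName: instead.

PowerShell

# Sample memory health counters for a default SQL Server instance
Get-Counter -Counter @(
    '\Memory\Available MBytes',
    '\SQLServer:Memory Manager\Target Server Memory (KB)',
    '\SQLServer:Memory Manager\Total Server Memory (KB)',
    '\SQLServer:Buffer Manager\Page life expectancy'
) -SampleInterval 15 -MaxSamples 4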
Compute
Compute in Azure is managed differently than on-premises. On-premises servers are
built to last several years without an upgrade, due to the management overhead and
cost of acquiring new hardware. Virtualization mitigates some of these issues, but
applications are optimized to take the most advantage of the underlying hardware,
meaning any significant change to resource consumption requires rebalancing the
entire physical environment.

This is not a challenge in Azure, where moving to a new virtual machine on a different
series of hardware, and even in a different region, is easy to achieve.

In Azure, you want to take advantage of as much of the virtual machine's resources as
possible; therefore, Azure virtual machines should be configured to keep the average
CPU as high as possible without impacting the workload.

The following PerfMon counters can help validate the compute health of a SQL Server
virtual machine:

\Processor Information(_Total)\% Processor Time
\Process(sqlservr)\% Processor Time

Note

Ideally, aim to use 80% of your compute, with peaks above 90% but without reaching
100% for any sustained period of time. Fundamentally, provision only the compute the
application needs, and then plan to scale up or down as the business requires.

Next steps
To learn more, see the other articles in this best practices series:

Quick checklist
VM size
Storage
Security
HADR settings

For security best practices, see Security considerations for SQL Server on Azure Virtual
Machines.
Review other SQL Server Virtual Machine articles at SQL Server on Azure Virtual
Machines Overview. If you have questions about SQL Server virtual machines, see the
Frequently Asked Questions.
Run SQL Server VM on an Azure
Dedicated Host
Article • 07/10/2023

Applies to: SQL Server on Azure VM

This article details the specifics of using a SQL Server virtual machine (VM) with Azure
Dedicated Host. Additional information about Azure Dedicated Host can be found in the
blog post Introducing Azure Dedicated Host .

Overview
Azure Dedicated Host is a service that provides physical servers - able to host one or
more virtual machines - dedicated to one Azure subscription. Dedicated hosts are the
same physical servers used in Microsoft's data centers, provided as a resource. You can
provision dedicated hosts within a region, availability zone, and fault domain. Then, you
can place VMs directly into your provisioned hosts, in whatever configuration best
meets your needs.

Limitations
Not all VM series are supported on dedicated hosts, and VM series availability
varies by region. For more information, see Overview of Azure Dedicated Hosts.

Licensing
You can choose between two different licensing options when you place your SQL
Server VM in an Azure Dedicated Host.

SQL VM licensing: This is the existing licensing option, where you pay for each SQL
Server VM license individually.
Dedicated host licensing: The new licensing model available for the Azure
Dedicated Host, where SQL Server licenses are bundled and paid for at the host
level.

Host-level options for using existing SQL Server licenses:

SQL Server Enterprise Edition Azure Hybrid Benefit (AHB)
Available to customers with SA or a qualifying subscription.
License all available physical cores and enjoy unlimited virtualization (up to the max vCPUs supported by the host).
For more information about applying the AHB to Azure Dedicated Host, see
Azure Hybrid Benefit FAQ .
SQL Server licenses acquired before October 1
SQL Server Enterprise edition has both host-level and by-VM license options.
SQL Server Standard edition has only a by-VM license option available.
For details, see Microsoft Product Terms .
If no SQL Server dedicated host-level option is selected, you may select SQL Server
AHB at the level of individual VMs, just as you would with multi-tenant VMs.

Provisioning
Provisioning a SQL Server VM to the dedicated host is no different than any other Azure
virtual machine. You can do so using Azure PowerShell, the Azure portal, and the Azure
CLI.
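
As a sketch of the PowerShell route, you create a host group and a host, and then reference
the host when building the VM configuration. The resource names, location, fault domain
count, and host SKU below are illustrative:

PowerShell

# Create a dedicated host group and a dedicated host (illustrative values)
New-AzHostGroup -ResourceGroupName 'myResourceGroup' -Name 'myHostGroup' `
    -Location 'eastus' -PlatformFaultDomainCount 1

$dedicatedHost = New-AzHost -ResourceGroupName 'myResourceGroup' -HostGroupName 'myHostGroup' `
    -Name 'myHost' -Location 'eastus' -Sku 'DSv3-Type3'

# Pin the SQL Server VM to the host by passing -HostId when building the VM config
$vmConfig = New-AzVMConfig -VMName 'mySqlVm' -VMSize 'Standard_D4s_v3' -HostId $dedicatedHost.Id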

The process of adding an existing SQL Server VM to the dedicated host requires
downtime, but it does not involve data loss. Nonetheless, back up all databases,
including system databases, prior to the move.

Virtualization
One of the benefits of a dedicated host is unlimited virtualization. For example, you can
have licenses for 64 vCores, but you can configure the host to have 128 vCores, so you
get double the vCores but pay only half of what you would for the SQL Server licenses.

Because it's your host, you're eligible to set the virtualization at a 1:2 ratio.

FAQ
Q: How does the Azure Hybrid Benefit work for Windows Server/SQL Server licenses
on Azure Dedicated Host?

A: Customers can use the value of their existing Windows Server and SQL Server licenses
with Software Assurance, or qualifying subscription licenses, to pay a reduced rate on
Azure Dedicated Host using Azure Hybrid Benefit. Windows Server Datacenter and SQL
Server Enterprise Edition customers get unlimited virtualization (deploy as many
Windows Server virtual machines as possible on the host subject to the physical capacity
of the underlying server) when they license the entire host and use Azure Hybrid Benefit.
All Windows Server and SQL Server workloads in Azure Dedicated Host are also eligible
for Extended Security Updates for Windows Server and SQL Server 2012 at no additional
charge.

Next steps
For more information, see the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Windows VM
What's new for SQL Server on Azure VMs
Extend support for SQL Server with
Azure
Article • 07/10/2023

Applies to: SQL Server on Azure VM

SQL Server 2012 has reached the end of its support (EOS) life cycle. Because many
customers are still using this version, we're providing several options to continue getting
support. You can migrate your on-premises SQL Server instances to Azure virtual
machines (VMs), migrate to Azure SQL Database, or stay on-premises and purchase
extended security updates.

Unlike with a managed instance, migrating to an Azure VM does not require recertifying
your applications. And unlike with staying on-premises, you'll receive free extended
security patches by migrating to an Azure VM.

The rest of this article provides considerations for migrating your SQL Server instance to
an Azure VM.

For more information about end of support options, see End of support.

Provisioning
There is a pay-as-you-go SQL Server 2012 on Windows Server 2012 R2 image available
on Azure Marketplace.

Note

SQL Server 2008 and SQL Server 2008 R2 are out of extended support and no
longer available from the Azure Marketplace.

Customers who are on an earlier version of SQL Server will need to either self-install or
upgrade to SQL Server 2012. Likewise, customers on an earlier version of Windows
Server will need to either deploy their VM from a custom VHD or upgrade to Windows
Server 2012 R2.

Images deployed through Azure Marketplace come with the SQL IaaS Agent extension
pre-installed. The SQL IaaS Agent extension is a requirement for flexible licensing and
automated patching. Customers who deploy self-installed VMs will need to manually
install the SQL IaaS Agent extension.
Note

Although the SQL Server Create and Manage options will work with the SQL Server
2012 image in the Azure portal, the following features are not supported: Automatic
backups, Azure Key Vault integration, and R Services.

Licensing
Pay-as-you-go SQL Server 2012 deployments can convert to Azure Hybrid Benefit .

To convert a Software Assurance (SA)-based license to pay-as-you-go, customers should
register with the SQL IaaS Agent extension. After that registration, the SQL license type
will be interchangeable between Azure Hybrid Benefit and pay-as-you-go.

Self-installed SQL Server 2012 instances on an Azure VM can register with the SQL IaaS
Agent extension and convert their license type to pay-as-you-go.
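
A sketch of that registration and license switch with the Az.SqlVirtualMachine module
follows; the resource names are illustrative:

PowerShell

# Register the VM with the SQL IaaS Agent extension as pay-as-you-go
New-AzSqlVM -ResourceGroupName 'myResourceGroup' -Name 'mySqlVm' `
    -Location 'eastus' -LicenseType 'PAYG'

# The license type can later be switched to Azure Hybrid Benefit (or back)
Update-AzSqlVM -ResourceGroupName 'myResourceGroup' -Name 'mySqlVm' -LicenseType 'AHUB'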

Migration
You can migrate EOS SQL Server instances to an Azure VM with manual backup/restore
methods. This is the most common migration method from on-premises to an Azure
VM.

Azure Site Recovery
For bulk migrations, we recommend the Azure Site Recovery service. With Azure Site
Recovery, customers can replicate the whole VM, including SQL Server, from on-premises
to an Azure VM.

SQL Server requires app-consistent Azure Site Recovery snapshots to guarantee recovery.
Azure Site Recovery supports app-consistent snapshots with a minimum 1-hour interval.
The minimum recovery point objective (RPO) possible for SQL Server with Azure Site
Recovery migrations is 1 hour. The recovery time objective (RTO) is 2 hours plus SQL
Server recovery time.

Database Migration Service
The Azure Database Migration Service is an option for customers if they're migrating
from on-premises to an Azure VM by upgrading SQL Server to the 2012 version or later.
Disaster recovery
Disaster recovery solutions for EOS SQL Server on an Azure VM are as follows:

SQL Server backups: Use Azure Backup to help protect your EOS SQL Server 2012
against ransomware, accidental deletion, and corruption with a 15-minute RPO and
point-in-time recovery. For more details, see this article.

Log shipping: You can create a log shipping replica in another zone or Azure
region with continuous restores to reduce the RTO. You need to manually
configure log shipping.

Azure Site Recovery: You can replicate your VM between zones and regions
through Azure Site Recovery replication. SQL Server requires app-consistent
snapshots to guarantee recovery in case of a disaster. Azure Site Recovery offers a
minimum 1-hour RPO and a 2-hour (plus SQL Server recovery time) RTO for EOS
SQL Server disaster recovery.

Security patching
Extended security updates for SQL Server VMs are delivered through the Microsoft
Windows Update channels after the SQL Server VM has been registered with the SQL
IaaS Agent extension. Patches can be downloaded manually or automatically.

Note

Registration with the SQL IaaS Agent extension is not required for manual
installation of extended security updates on Azure virtual machines. Microsoft
Update will automatically detect that the VM is running in Azure and present the
relevant updates for download even if the extension is not present.

Automated patching is enabled by default. Automated patching allows Azure to
automatically patch SQL Server and the operating system. You can specify a day of the
week, time, and duration for a maintenance window if the SQL Server IaaS extension is
installed. Azure performs patching in this maintenance window. The maintenance
window schedule uses the VM locale for time. For more information, see Automated
patching for SQL Server on Azure Virtual Machines.

Azure Update Management does not currently detect patches for SQL Server
Marketplace images. In this case, look under Windows Updates to apply SQL Server
updates.
Next steps
Migration guide: SQL Server to SQL Server on Azure Virtual Machines
Create a SQL Server VM in the Azure portal
FAQ for SQL Server on Azure Virtual Machines

Find out more about end of support options and Extended Security Updates.
Connect to a SQL Server virtual machine
on Azure
Article • 06/28/2023

Applies to:
SQL Server on Azure VM

Overview
This article describes how to connect to your SQL on Azure virtual machine (VM). It
covers some general connectivity scenarios. If you need to troubleshoot or configure
connectivity outside of the portal, see the manual configuration at the end of this topic.

If you would rather have a full walkthrough of both provisioning and connectivity, see
Provision a SQL Server virtual machine on Azure.

Connection scenarios
The way a client connects to a SQL Server VM differs depending on the location of the
client and the networking configuration.

If you provision a SQL Server VM in the Azure portal, you have the option of specifying
the type of SQL connectivity.

Your options for connectivity include:


Public: Connect to SQL Server over the internet.
Private: Connect to SQL Server in the same virtual network.
Local: Connect to SQL Server locally on the same virtual machine.

The following sections explain the Public and Private options in more detail.

Connect to SQL Server over the internet


If you want to connect to your SQL Server database engine from the internet, select
Public for the SQL connectivity type in the portal during provisioning. The portal
automatically does the following steps:

Enables the TCP/IP protocol for SQL Server.
Configures a firewall rule to open the SQL Server TCP port (default 1433).
Enables SQL Server authentication, required for public access.
Configures the network security group on the VM to allow all TCP traffic on the SQL Server port.

Important

The virtual machine images for the SQL Server Developer and Express editions do
not automatically enable the TCP/IP protocol. For Developer and Express editions,
you must use SQL Server Configuration Manager to manually enable the TCP/IP
protocol after creating the VM.

Any client with internet access can connect to the SQL Server instance by specifying
either the public IP address of the virtual machine or any DNS label assigned to that IP
address. If the SQL Server port is 1433, you do not need to specify it in the connection
string. The following connection string connects to a SQL VM with a DNS label of
sqlvmlabel.eastus.cloudapp.azure.com using SQL authentication (you could also use the

public IP address).

text

Server=sqlvmlabel.eastus.cloudapp.azure.com;Integrated Security=false;User ID=<login_name>;Password=<your_password>

Although this string enables connectivity for clients over the internet, this does not
imply that anyone can connect to your SQL Server instance. Outside clients have to use
the correct username and password. However, for additional security, you can avoid the
well-known port 1433. For example, if you were to configure SQL Server to listen on port
1500 and establish proper firewall and network security group rules, you could connect
by appending the port number to the server name. The following example alters the
previous one by adding a custom port number, 1500, to the server name:

text

Server=sqlvmlabel.eastus.cloudapp.azure.com,1500;Integrated Security=false;User ID=<login_name>;Password=<your_password>
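
One quick way to test such a connection from a client is with the Invoke-Sqlcmd cmdlet,
assuming the SqlServer PowerShell module is installed; the server name and port are the
illustrative values from above:

PowerShell

# Test the connection and return the server name
Invoke-Sqlcmd -ServerInstance 'sqlvmlabel.eastus.cloudapp.azure.com,1500' `
    -Username '<login_name>' -Password '<your_password>' `
    -Query 'SELECT @@SERVERNAME AS ServerName;'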

Note

When you query SQL Server on VM over the internet, all outgoing data from the
Azure datacenter is subject to normal pricing on outbound data transfers .

Connect to SQL Server within a virtual network


When you choose Private for the SQL connectivity type in the portal, Azure configures
most of the settings identical to Public. The one difference is that there is no network
security group rule to allow outside traffic on the SQL Server port (default 1433).

Important

The virtual machine images for the SQL Server Developer and Express editions do
not automatically enable the TCP/IP protocol. For Developer and Express editions,
you must use SQL Server Configuration Manager to manually enable the TCP/IP
protocol after creating the VM.

Private connectivity is often used in conjunction with a virtual network, which enables
several scenarios. You can connect VMs in the same virtual network, even if those VMs
exist in different resource groups. And with a site-to-site VPN, you can create a hybrid
architecture that connects VMs with on-premises networks and machines.

Virtual networks also enable you to join your Azure VMs to a domain. This is the only
way to use Windows authentication with SQL Server. The other connection scenarios
require SQL authentication with user names and passwords.
Assuming that you have configured DNS in your virtual network, you can connect to
your SQL Server instance by specifying the SQL Server VM computer name in the
connection string. The following example also assumes that Windows authentication has
been configured and that the user has been granted access to the SQL Server instance.

text

Server=mysqlvm;Integrated Security=true

Enable TCP/IP for Developer and Express editions
When changing SQL Server connectivity settings, Azure does not automatically enable
the TCP/IP protocol for SQL Server Developer and Express editions. The steps below
explain how to manually enable TCP/IP so that you can connect remotely by IP address.

First, connect to the SQL Server virtual machine with remote desktop.

1. After the Azure virtual machine is created and running, select Virtual machine, and
then choose your new VM.

2. Select Connect and then choose RDP from the drop-down to download your RDP
file.

3. Open the RDP file that your browser downloads for the VM.

4. The Remote Desktop Connection notifies you that the publisher of this remote
connection cannot be identified. Click Connect to continue.
5. In the Windows Security dialog, click Use a different account. You might have to
click More choices to see this. Specify the user name and password that you
configured when you created the VM. You must add a backslash before the user
name.

6. Click OK to connect.

Next, enable the TCP/IP protocol with SQL Server Configuration Manager.

1. While connected to the virtual machine with remote desktop, search for
Configuration Manager:
2. In SQL Server Configuration Manager, in the console pane, expand SQL Server
Network Configuration.

3. In the console pane, click Protocols for MSSQLSERVER (the default instance
name). In the details pane, right-click TCP and click Enable if it is not already
enabled.

4. In the console pane, click SQL Server Services. In the details pane, right-click SQL
Server (instance name) (the default instance is SQL Server (MSSQLSERVER)), and
then click Restart, to stop and restart the instance of SQL Server.
5. Close SQL Server Configuration Manager.

For more information about enabling protocols for the SQL Server Database Engine, see
Enable or Disable a Server Network Protocol.
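
If you prefer to script these steps instead of using the Configuration Manager UI, the SQL
Server WMI provider can toggle the protocol. The following is a minimal sketch, run on the
VM, assuming the default instance name MSSQLSERVER and the SqlServer PowerShell
module:

PowerShell

# Enable TCP/IP for the default instance, then restart the service to apply it
Import-Module SqlServer
$wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer
$tcp = $wmi.ServerInstances['MSSQLSERVER'].ServerProtocols['Tcp']
$tcp.IsEnabled = $true
$tcp.Alter()

# -Force also restarts dependent services, such as SQL Server Agent
Restart-Service -Name 'MSSQLSERVER' -Force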

Connect with SSMS


The following steps show how to create an optional DNS label for your Azure VM and
then connect with SQL Server Management Studio (SSMS).

Configure a DNS Label for the public IP address


To connect to the SQL Server Database Engine from the Internet, consider creating a
DNS Label for your public IP address. You can connect by IP address, but the DNS Label
creates an A Record that is easier to identify and abstracts the underlying public IP
address.

Note

DNS Labels are not required if you plan to only connect to the SQL Server instance
within the same Virtual Network or only locally.

To create a DNS Label, first select Virtual machines in the portal. Select your SQL Server
VM to bring up its properties.

1. In the virtual machine overview, select your Public IP address.


2. In the properties for your Public IP address, expand Configuration.

3. Enter a DNS Label name. This name is an A Record that can be used to connect to
your SQL Server VM by name instead of by IP Address directly.

4. Select the Save button.
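
The same DNS label can be set with PowerShell. The following is a sketch assuming an
existing public IP resource; the resource names and label are illustrative:

PowerShell

# Assign a DNS label to the VM's existing public IP address
$pip = Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'mySqlVm-ip'
if (-not $pip.DnsSettings) {
    $pip.DnsSettings = New-Object Microsoft.Azure.Commands.Network.Models.PSPublicIpAddressDnsSettings
}
$pip.DnsSettings.DomainNameLabel = 'sqlvmlabel'
Set-AzPublicIpAddress -PublicIpAddress $pip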

Connect to the Database Engine from another computer


1. On a computer connected to the internet, open SQL Server Management Studio
(SSMS). If you do not have SQL Server Management Studio, you can download it
here.

2. In the Connect to Server or Connect to Database Engine dialog box, edit the
Server name value. Enter the IP address or full DNS name of the virtual machine
(determined in the previous task). You can also add a comma and provide SQL
Server's TCP port. For example, tutorial-sqlvm1.westus2.cloudapp.azure.com,1433 .
3. In the Authentication box, select SQL Server Authentication.

4. In the Login box, type the name of a valid SQL login.

5. In the Password box, type the password of the login.

6. Select Connect.

Manual configuration and troubleshooting


Although the portal provides options to automatically configure connectivity, it is useful
to know how to manually configure connectivity. Understanding the requirements can
also aid troubleshooting.

The following table lists the requirements to connect to SQL Server on Azure VM.

Enable SQL Server authentication mode: SQL Server authentication is needed to connect to the VM remotely unless you have configured Active Directory on a virtual network.
Create a SQL login: If you are using SQL authentication, you need a SQL login with a user name and password that also has permissions to your target database.
Enable TCP/IP protocol: SQL Server must allow connections over TCP.
Enable firewall rule for the SQL Server port: The firewall on the VM must allow inbound traffic on the SQL Server port (default 1433).
Create a network security group rule for TCP 1433: You must allow the VM to receive traffic on the SQL Server port (default 1433) if you want to connect over the internet. Local and virtual-network-only connections do not require this. This is the only step required in the Azure portal.
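
When troubleshooting, a quick reachability check from the client shows whether the firewall
and network security group rules are in place; the host name below is illustrative:

PowerShell

# TcpTestSucceeded is True only if the NSG and VM firewall allow the port
Test-NetConnection -ComputerName 'sqlvmlabel.eastus.cloudapp.azure.com' -Port 1433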

 Tip

The steps in the preceding table are done for you when you configure connectivity
in the portal. Use these steps only to confirm your configuration or to set up
connectivity manually for SQL Server.

Connect to a SQL Server on Azure VM using Azure AD
Enable Azure Active Directory (Azure AD) for your SQL Server on Azure Virtual Machines
via the Azure portal. SQL Server with Azure Active Directory is supported only on SQL
Server 2022 (16.x) and later versions.

Next steps
To see provisioning instructions along with these connectivity steps, see Provisioning a
SQL Server virtual machine on Azure.

For other topics related to running SQL Server on Azure VMs, see SQL Server on Azure
virtual machines.
Provision SQL Server on Azure VM
(Azure portal)
Article • 03/27/2023

Applies to:
SQL Server on Azure VM

This article provides a detailed description of the available configuration options when
deploying your SQL Server on Azure Virtual Machines (VMs) by using the Azure portal.
For a quick guide, see the SQL Server VM quickstart instead.

Prerequisites
An Azure subscription. Create a free account to get started.

Choose Marketplace image


Use the Azure Marketplace to choose one of several pre-configured images from the
virtual machine gallery.

The Developer edition is used in this article because it is a full-featured, free edition of
SQL Server for development and testing. You pay only for the cost of running the VM.
However, you are free to choose any of the images to use in this walkthrough. For a
description of available images, see the SQL Server Windows Virtual Machines overview.

Licensing costs for SQL Server are incorporated into the per-second pricing of the VM
you create and vary by edition and cores. However, SQL Server Developer edition is
free for development and testing, not production. Also, SQL Server Express is free for
lightweight workloads (less than 1 GB of memory, less than 10 GB of storage). You can
also bring your own license (BYOL) and pay only for the VM. Those image names are
prefixed with {BYOL}. For more information on these options, see Pricing guidance for
SQL Server Azure VMs.

To choose an image, follow these steps:

1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in
the list, select All services, then type Azure SQL in the search box. You can select
the star next to Azure SQL to save it as a favorite to pin it to the left-hand
navigation.
2. Select + Create to open the Select SQL deployment option page. Select the
Image drop-down and then type 2019 in the SQL Server image search box. Choose
a SQL Server image, such as Free SQL Server License: SQL 2019 on Windows
Server 2019 from the drop-down. Choose Show details for additional information
about the image.

3. Select Create.

Basic settings
The Basics tab allows you to select the subscription, resource group, and instance
details.

Using a new resource group is helpful if you are just testing or learning about SQL
Server deployments in Azure. After you finish with your test, delete the resource group
to automatically delete the VM and all resources associated with that resource group.
For more information about resource groups, see Azure Resource Manager Overview.

On the Basics tab, provide the following information:

Under Project Details, make sure the correct subscription is selected.


In the Resource group section, either select an existing resource group from the
list or choose Create new to create a new resource group. A resource group is a
collection of related resources in Azure (virtual machines, storage accounts, virtual
networks, etc.).
Under Instance details:

1. Enter a unique Virtual machine name.


2. Choose a location for your Region.
3. For the purpose of this guide, leave Availability options set to No
infrastructure redundancy required. To find out more information about
availability options, see Availability.
4. In the Image list, select Free SQL Server License: SQL Server 2019 Developer on
Windows Server 2019 if it's not already selected.
5. Choose Standard for Security type.
6. Select See all sizes for the Size of the virtual machine and search for the
E4ds_v5 offering. This is one of the minimum recommended VM sizes for SQL
Server on Azure VMs. If this is for testing purposes, be sure to clean up your
resources once you're done with them to prevent any unexpected charges.
For production workloads, see the recommended machine sizes and
configuration in Performance best practices for SQL Server in Azure Virtual
Machines.
Important

The estimated monthly cost displayed on the Choose a size window does not
include SQL Server licensing costs. This estimate is the cost of the VM alone. For the
Express and Developer editions of SQL Server, this estimate is the total estimated
cost. For other editions, see the Windows Virtual Machines pricing page and
select your target edition of SQL Server. Also see the Pricing guidance for SQL
Server Azure VMs and Sizes for virtual machines.

Under Administrator account, provide a username and password. The password
must be at least 12 characters long and meet the defined complexity requirements.

Under Inbound port rules, choose Allow selected ports and then select RDP
(3389) from the drop-down.

You also have the option to enable the Azure Hybrid Benefit to use your own SQL Server
license and save on licensing cost.

Disks
On the Disks tab, configure your disk options.

Under OS disk type, select the type of disk you want for your OS from the drop-
down. Premium is recommended for production systems but is not available for a
Basic VM. To use a Premium SSD, change the virtual machine size.
Under Advanced, select Yes under use Managed Disks.

Microsoft recommends Managed Disks for SQL Server. Managed Disks handles storage
behind the scenes. In addition, when virtual machines with Managed Disks are in the
same availability set, Azure distributes the storage resources to provide appropriate
redundancy. For more information, see Azure Managed Disks Overview. For specifics
about managed disks in an availability set, see Use managed disks for VMs in availability
set.

Networking
On the Networking tab, configure your networking options.

Create a new virtual network or use an existing virtual network for your SQL Server
VM. Designate a Subnet as well.

Under NIC network security group, select either a basic security group or the
advanced security group. Choosing the basic option allows you to select inbound
ports for the SQL Server VM which are the same values configured on the Basic
tab. Selecting the advanced option allows you to choose an existing network
security group, or create a new one.

You can make other changes to network settings, or keep the default values.

Management
On the Management tab, configure monitoring and auto-shutdown.

Azure enables Boot diagnostics by default with the same storage account
designated for the VM. On this tab, you can change these settings and enable OS
guest diagnostics.
You can also enable System assigned managed identity and auto-shutdown on
this tab.

SQL Server settings


On the SQL Server settings tab, configure specific settings and optimizations for SQL
Server. You can configure the following settings for SQL Server:

Connectivity
Authentication
Azure Key Vault integration
Storage configuration
SQL instance settings
Automated patching
Automated backup
Machine Learning Services

Connectivity
Under SQL connectivity, specify the type of access you want to the SQL Server instance
on this VM. For the purposes of this walkthrough, select Public (internet) to allow
connections to SQL Server from machines or services on the internet. With this option
selected, Azure automatically configures the firewall and the network security group to
allow traffic on the port selected.

 Tip

By default, SQL Server listens on a well-known port, 1433. For increased security,
change the port in the previous dialog to listen on a non-default port, such as
1401. If you change the port, you must connect using that port from any client
tools, such as SQL Server Management Studio (SSMS).

To connect to SQL Server via the internet, you also must enable SQL Server
Authentication, which is described in the next section.

If you would prefer to not enable connections to the Database Engine via the internet,
choose one of the following options:

Local (inside VM only) to allow connections to SQL Server only from within the
VM.
Private (within Virtual Network) to allow connections to SQL Server from
machines or services in the same virtual network.

In general, improve security by choosing the most restrictive connectivity that your
scenario allows. But all the options are securable through network security group (NSG)
rules and SQL/Windows Authentication. You can edit the NSG after the VM is created.
For more information, see Security Considerations for SQL Server in Azure Virtual
Machines.
Authentication
If you require SQL Server Authentication, select Enable under SQL Authentication on
the SQL Server settings tab.

Note

If you plan to access SQL Server over the internet (the Public connectivity option),
you must enable SQL Authentication here. Public access to the SQL Server requires
SQL Authentication.

If you enable SQL Server Authentication, specify a Login name and Password. This login
name is configured as a SQL Server Authentication login and a member of the sysadmin
fixed server role. For more information about Authentication Modes, see Choose an
Authentication Mode.

If you prefer not to enable SQL Server Authentication, you can use the local
Administrator account on the VM to connect to the SQL Server instance.

Azure Key Vault integration


To store security secrets in Azure for encryption, select SQL Server settings, and scroll
down to Azure key vault integration. Select Enable and fill in the requested information.

The following parameters are required to configure Azure Key Vault (AKV) integration:

Key Vault URL: The location of the key vault. Example: https://contosokeyvault.vault.azure.net/
Principal name: Azure Active Directory service principal name, also referred to as the Client ID. Example: fde2b411-33d5-4e11-af04eb07b669ccf2
Principal secret: Azure Active Directory service principal secret, also referred to as the Client Secret. Example: 9VTJSQwzlFepD8XODnzy8n2V01Jd8dAjwm/azF1XDKM=
Credential name: AKV integration creates a credential within SQL Server, allowing the VM to access the key vault. Choose a name for this credential. Example: mycred1

For more information, see Configure Azure Key Vault Integration for SQL Server on
Azure VMs.

Storage configuration
On the SQL Server settings tab, under Storage configuration, select Change
configuration to open the Configure storage page and specify storage requirements.
You can choose to leave the values at default, or you can manually change the storage
topology to suit your IOPS needs. For more information, see storage configuration.
Under Data storage, choose the location for your data drive, the disk type, and the
number of disks. You can also select the checkbox to store your system databases on
your data drive instead of the local C:\ drive.

Under Log storage, you can choose to use the same drive as the data drive for your
transaction log files, or you can choose to use a separate drive from the drop-down. You
can also choose the name of the drive, the disk type, and the number of disks.
Configure your tempdb database settings under Tempdb storage, such as the location of
the database files, as well as the number of files, initial size, and autogrowth size in MB.
Currently, during deployment, the max number of tempdb files is 8, but more files can be
added after the SQL Server VM is deployed.

Select OK to save your storage configuration settings.

SQL instance settings


Select Change SQL instance settings to modify SQL Server configuration options, such
as the server collation, max degree of parallelism (MAXDOP), SQL Server min and max
memory limits, and whether you want to optimize for ad hoc workloads.

SQL Server license


If you're a Software Assurance customer, you can use the Azure Hybrid Benefit to
bring your own SQL Server license and save on resources. Select Yes to enable the Azure
Hybrid Benefit, and then confirm that you have Software Assurance by selecting the
checkbox.

If you chose a free license image, such as the Developer edition, the SQL Server license
option is grayed out.

Automated patching
Automated patching is enabled by default. Automated patching allows Azure to
automatically apply SQL Server and operating system security updates. Specify a day of
the week, time, and duration for a maintenance window. Azure performs patching in this
maintenance window. The maintenance window schedule uses the VM locale. If you do
not want Azure to automatically patch SQL Server and the operating system, select
Disable.
For more information, see Automated Patching for SQL Server in Azure Virtual Machines.

Automated backup
Enable automatic database backups for all databases under Automated backup.
Automated backup is disabled by default.

When you enable SQL automated backup, you can configure the following settings:

Retention period for backups (up to 90 days)


Storage account, and storage container, to use for backups
Encryption option and password for backups
Backup system databases
Configure backup schedule

To encrypt the backup, select Enable. Then specify the Password. Azure creates a
certificate to encrypt the backups and uses the specified password to protect that
certificate.

Choose Select Storage Container to specify the container where you want to store your
backups.

By default the schedule is set automatically, but you can create your own schedule by
selecting Manual, which allows you to configure the backup frequency, backup time
window, and the log backup frequency in minutes.
For more information, see Automated Backup for SQL Server in Azure Virtual Machines.

Machine Learning Services


You have the option to enable Machine Learning Services. This option lets you use
machine learning with Python and R in SQL Server 2017. Select Enable on the SQL
Server Settings window. Enabling this feature from the Azure portal after the SQL Server
VM is deployed will trigger a restart of the SQL Server service.

Review + create
On the Review + create tab:

1. Review the summary.


2. Select Create to create the SQL Server, resource group, and resources specified for
this VM.

You can monitor the deployment from the Azure portal. The Notifications button at the
top of the screen shows basic status of the deployment.

Note

An example of time for Azure to deploy a SQL Server VM: A test SQL Server VM
provisioned to the East US region with default settings takes approximately 12
minutes to complete. You might experience faster or slower deployment times
based on your region and selected settings.
Open the VM with Remote Desktop
Use the following steps to connect to the SQL Server virtual machine with Remote
Desktop Protocol (RDP):

1. After the Azure virtual machine is created and running, select Virtual machine, and
then choose your new VM.

2. Select Connect and then choose RDP from the drop-down to download your RDP
file.

3. Open the RDP file that your browser downloads for the VM.

4. The Remote Desktop Connection notifies you that the publisher of this remote
connection cannot be identified. Click Connect to continue.

5. In the Windows Security dialog, click Use a different account. You might have to
click More choices to see this. Specify the user name and password that you
configured when you created the VM. You must add a backslash before the user
name.
6. Click OK to connect.

After you connect to the SQL Server virtual machine, you can launch SQL Server
Management Studio and connect with Windows Authentication using your local
administrator credentials. If you enabled SQL Server Authentication, you can also
connect with SQL Authentication using the SQL login and password you configured
during provisioning.

Access to the machine enables you to directly change machine and SQL Server settings
based on your requirements. For example, you could configure the firewall settings or
change SQL Server configuration settings.

Connect to SQL Server remotely


In this walkthrough, you selected Public access for the virtual machine and SQL Server
Authentication. These settings automatically configured the virtual machine to allow
SQL Server connections from any client over the internet (assuming they have the
correct SQL login).

Note

If you did not select Public during provisioning, then you can change your SQL
connectivity settings through the portal after provisioning. For more information,
see Change your SQL connectivity settings.
The following sections show how to connect over the internet to your SQL Server VM
instance.

Configure a DNS Label for the public IP address


To connect to the SQL Server Database Engine from the Internet, consider creating a
DNS Label for your public IP address. You can connect by IP address, but the DNS Label
creates an A Record that is easier to identify and abstracts the underlying public IP
address.

Note

DNS Labels are not required if you plan to only connect to the SQL Server instance
within the same Virtual Network or only locally.

To create a DNS Label, first select Virtual machines in the portal. Select your SQL Server
VM to bring up its properties.

1. In the virtual machine overview, select your Public IP address.

2. In the properties for your Public IP address, expand Configuration.

3. Enter a DNS Label name. This name is an A Record that can be used to connect to
your SQL Server VM by name instead of by IP Address directly.

4. Select the Save button.


Connect to the Database Engine from another computer
1. On a computer connected to the internet, open SQL Server Management Studio
(SSMS). If you do not have SQL Server Management Studio, you can download it
here.

2. In the Connect to Server or Connect to Database Engine dialog box, edit the
Server name value. Enter the IP address or full DNS name of the virtual machine
(determined in the previous task). You can also add a comma and provide SQL
Server's TCP port. For example, tutorial-sqlvm1.westus2.cloudapp.azure.com,1433 .

3. In the Authentication box, select SQL Server Authentication.

4. In the Login box, type the name of a valid SQL login.

5. In the Password box, type the password of the login.

6. Select Connect.
Note

This example uses the common port 1433. However, this value will need to be
modified if a different port (such as 1401) was specified during the deployment of
the SQL Server VM.

Known Issues

I am unable to change the SQL binary files installation path
SQL Server images from Azure Marketplace install the SQL Server binaries to the C drive.
It is not currently possible to change this during deployment. The only available
workaround is to manually uninstall SQL Server from within the VM, then reinstall SQL
Server and choose a different location for the binary files during the installation process.

Next steps
For other information about using SQL Server in Azure, see SQL Server on Azure Virtual
Machines and the Frequently Asked Questions.
How to use Azure PowerShell to
provision SQL Server on Azure Virtual
Machines
Article • 03/15/2023

Applies to:
SQL Server on Azure VM

This guide covers options for using PowerShell to provision SQL Server on Azure Virtual
Machines (VMs). For a streamlined Azure PowerShell example that relies on default
values, see the SQL VM Azure PowerShell quickstart.

If you don't have an Azure subscription, create a free account before you begin.

Note

This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.

Configure your subscription


1. Open PowerShell and establish access to your Azure account by running the
Connect-AzAccount command.

PowerShell

Connect-AzAccount

2. When prompted, enter your credentials. Use the same email and password that
you use to sign in to the Azure portal.

Define image variables


To reuse values and simplify script creation, start by defining a number of variables.
Change the parameter values as you want, but be aware of naming restrictions related
to name lengths and special characters when modifying the values provided.
Location and resource group
Define the data region and the resource group where you want to create the other VM
resources.

Modify as you want and then run these cmdlets to initialize these variables.

PowerShell

$Location = "SouthCentralUS"

$ResourceGroupName = "sqlvm2"

Storage properties
Define the storage account and the type of storage to be used by the virtual machine.

Modify as you want, and then run the following cmdlet to initialize these variables. We
recommend using premium SSDs for production workloads.

PowerShell

$StorageName = $ResourceGroupName + "storage"

$StorageSku = "Premium_LRS"

Network properties
Define the properties to be used by the network in the virtual machine.

Network interface
TCP/IP allocation method
Virtual network name
Virtual subnet name
Range of IP addresses for the virtual network
Range of IP addresses for the subnet
Public domain name label

Modify as you want and then run this cmdlet to initialize these variables.

PowerShell

$InterfaceName = $ResourceGroupName + "ServerInterface"

$NsgName = $ResourceGroupName + "nsg"

$TCPIPAllocationMethod = "Dynamic"

$VNetName = $ResourceGroupName + "VNet"

$SubnetName = "Default"
$VNetAddressPrefix = "10.0.0.0/16"

$VNetSubnetAddressPrefix = "10.0.0.0/24"

$DomainName = $ResourceGroupName

Virtual machine properties


Define the following properties:

Virtual machine name


Computer name
Virtual machine size
Operating system disk name for the virtual machine

Modify as you want and then run this cmdlet to initialize these variables.

PowerShell

$VMName = $ResourceGroupName + "VM"

$ComputerName = $ResourceGroupName + "Server"

$VMSize = "Standard_DS13"

$OSDiskName = $VMName + "OSDisk"

Choose a SQL Server image


Use the following variables to define the SQL Server image to use for the virtual
machine.

1. First, list all of the SQL Server image offerings with the Get-AzVMImageOffer
command. This command lists the current images that are available in the Azure
portal and also older images that can only be installed with PowerShell:

PowerShell

Get-AzVMImageOffer -Location $Location -Publisher 'MicrosoftSQLServer'

2. For this tutorial, use the following variables to specify SQL Server 2017 on
Windows Server 2016.

PowerShell

$OfferName = "SQL2017-WS2016"

$PublisherName = "MicrosoftSQLServer"

$Version = "latest"

3. Next, list the available editions for your offer.

PowerShell

Get-AzVMImageSku -Location $Location -Publisher 'MicrosoftSQLServer' -Offer $OfferName | Select Skus

4. For this tutorial, use the SQL Server 2017 Developer edition (SQLDEV). The
Developer edition is freely licensed for testing and development, and you only pay
for the cost of running the VM.

PowerShell

$Sku = "SQLDEV"

Create a resource group


With the Resource Manager deployment model, the first object that you create is the
resource group. Use the New-AzResourceGroup cmdlet to create an Azure resource
group and its resources. Specify the variables that you previously initialized for the
resource group name and location.

Run this cmdlet to create your new resource group.

PowerShell

New-AzResourceGroup -Name $ResourceGroupName -Location $Location

Create a storage account


The virtual machine requires storage resources for the operating system disk and for the
SQL Server data and log files. For simplicity, you'll create a single disk for both. You can
attach additional disks later using the Add-AzVMDataDisk cmdlet to place your SQL Server
data and log files on dedicated disks. Use the New-AzStorageAccount cmdlet to create a
storage account in your new resource group. Specify the variables that you previously
initialized for the storage account name, storage SKU name, and location.

Run this cmdlet to create your new storage account.

PowerShell
$StorageAccount = New-AzStorageAccount -ResourceGroupName $ResourceGroupName
`

-Name $StorageName -SkuName $StorageSku `

-Kind "Storage" -Location $Location

 Tip

Creating the storage account can take a few minutes.

Create network resources


The virtual machine requires a number of network resources for network connectivity.

Each virtual machine requires a virtual network.


A virtual network must have at least one subnet defined.
A network interface must be defined with either a public or a private IP address.

Create a virtual network subnet configuration


Start by creating a subnet configuration for your virtual network. For this tutorial, create
a default subnet using the New-AzVirtualNetworkSubnetConfig cmdlet. Specify the
variables that you previously initialized for the subnet name and address prefix.

Note

You can define additional properties of the virtual network subnet configuration
using this cmdlet, but that is beyond the scope of this tutorial.

Run this cmdlet to create your virtual subnet configuration.

PowerShell

$SubnetConfig = New-AzVirtualNetworkSubnetConfig -Name $SubnetName -AddressPrefix $VNetSubnetAddressPrefix

Create a virtual network


Next, create your virtual network in your new resource group using the New-
AzVirtualNetwork cmdlet. Specify the variables that you previously initialized for the
name, location, and address prefix. Use the subnet configuration that you defined in the
previous step.

Run this cmdlet to create your virtual network.

PowerShell

$VNet = New-AzVirtualNetwork -Name $VNetName `

-ResourceGroupName $ResourceGroupName -Location $Location `

-AddressPrefix $VNetAddressPrefix -Subnet $SubnetConfig

Create the public IP address


Now that your virtual network is defined, you must configure an IP address for
connectivity to the virtual machine. For this tutorial, create a public IP address using
dynamic IP addressing to support Internet connectivity. Use the New-AzPublicIpAddress
cmdlet to create the public IP address in your new resource group. Specify the variables
that you previously initialized for the name, location, allocation method, and DNS
domain name label.

Note

You can define additional properties of the public IP address using this cmdlet, but
that is beyond the scope of this initial tutorial. You could also create a private
address or an address with a static address, but that is also beyond the scope of
this tutorial.

Run this cmdlet to create your public IP address.

PowerShell

$PublicIp = New-AzPublicIpAddress -Name $InterfaceName `

-ResourceGroupName $ResourceGroupName -Location $Location `

-AllocationMethod $TCPIPAllocationMethod -DomainNameLabel $DomainName

Create the network security group


To secure the VM and SQL Server traffic, create a network security group.

1. First, create a network security group rule for remote desktop (RDP) to allow RDP
connections.
PowerShell

$NsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name "RDPRule" -Protocol Tcp `
    -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow

2. Configure a network security group rule that allows traffic on TCP port 1433. Doing
so enables connections to SQL Server over the internet.

PowerShell

$NsgRuleSQL = New-AzNetworkSecurityRuleConfig -Name "MSSQLRule" -Protocol Tcp `
    -Direction Inbound -Priority 1001 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 1433 -Access Allow

3. Create the network security group.

PowerShell

$Nsg = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroupName `
    -Location $Location -Name $NsgName `
    -SecurityRules $NsgRuleRDP,$NsgRuleSQL

Create the network interface


Now you're ready to create the network interface for your virtual machine. Use the New-
AzNetworkInterface cmdlet to create the network interface in your new resource group.
Specify the name, location, subnet, and public IP address previously defined.

Run this cmdlet to create your network interface.

PowerShell

$Interface = New-AzNetworkInterface -Name $InterfaceName `

-ResourceGroupName $ResourceGroupName -Location $Location `

-SubnetId $VNet.Subnets[0].Id -PublicIpAddressId $PublicIp.Id `

-NetworkSecurityGroupId $Nsg.Id

Configure a VM object
Now that storage and network resources are defined, you're ready to define compute
resources for the virtual machine.

Specify the virtual machine size and various operating system properties.
Specify the network interface that you previously created.
Define blob storage.
Specify the operating system disk.

Create the VM object


Start by specifying the virtual machine size. For this tutorial, specify a DS13. Use the
New-AzVMConfig cmdlet to create a configurable virtual machine object. Specify the
variables that you previously initialized for the name and size.

Run this cmdlet to create the virtual machine object.

PowerShell

$VirtualMachine = New-AzVMConfig -VMName $VMName -VMSize $VMSize

Create a credential object to hold the name and password for the local administrator credentials
Before you can set the operating system properties for the virtual machine, you must
supply the credentials for the local administrator account as a secure string. To
accomplish this, use the Get-Credential cmdlet.

Run the following cmdlet. You'll need to type the VM's local administrator name and
password into the PowerShell credential request window.

PowerShell

$Credential = Get-Credential -Message "Type the name and password of the local administrator account."

Set the operating system properties for the virtual machine
Now you're ready to set the virtual machine's operating system properties with the Set-
AzVMOperatingSystem cmdlet.

Set the type of operating system as Windows.
Require the virtual machine agent to be installed.
Specify that the cmdlet enables auto update.
Specify the variables that you previously initialized for the virtual machine name, the computer name, and the credential.

Run this cmdlet to set the operating system properties for your virtual machine.

PowerShell

$VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine `

-Windows -ComputerName $ComputerName -Credential $Credential `

-ProvisionVMAgent -EnableAutoUpdate

Add the network interface to the virtual machine


Next, use the Add-AzVMNetworkInterface cmdlet to add the network interface using the
variable that you defined earlier.

Run this cmdlet to set the network interface for your virtual machine.

PowerShell

$VirtualMachine = Add-AzVMNetworkInterface -VM $VirtualMachine -Id $Interface.Id

Set the blob storage location for the disk to be used by the virtual machine
Next, set the blob storage location for the VM's disk with the variables that you defined
earlier.

Run this cmdlet to set the blob storage location.

PowerShell

$OSDiskUri = $StorageAccount.PrimaryEndpoints.Blob.ToString() + "vhds/" + $OSDiskName + ".vhd"

Set the operating system disk properties for the virtual machine
Next, set the operating system disk properties for the virtual machine using the Set-
AzVMOSDisk cmdlet.

Specify that the operating system for the virtual machine will come from an image.
Set caching to read only (because SQL Server is being installed on the same disk).
Specify the variables that you previously initialized for the VM name and the
operating system disk.

Run this cmdlet to set the operating system disk properties for your virtual machine.

PowerShell

$VirtualMachine = Set-AzVMOSDisk -VM $VirtualMachine -Name `

$OSDiskName -VhdUri $OSDiskUri -Caching ReadOnly -CreateOption FromImage

Specify the platform image for the virtual machine


The last configuration step is to specify the platform image for your virtual machine. For
this tutorial, use the SQL Server 2017 image that you selected earlier. Use the
Set-AzVMSourceImage cmdlet to use this image with the variables that you defined earlier.

Run this cmdlet to specify the platform image for your virtual machine.

PowerShell

$VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine `

-PublisherName $PublisherName -Offer $OfferName `

-Skus $Sku -Version $Version

Create the SQL VM


Now that you've finished the configuration steps, you're ready to create the virtual
machine. Use the New-AzVM cmdlet to create the virtual machine using the variables
that you defined.

 Tip

Creating the VM can take a few minutes.

Run this cmdlet to create your virtual machine.

PowerShell
New-AzVM -ResourceGroupName $ResourceGroupName -Location $Location -VM $VirtualMachine

The virtual machine is created.

Note

If you get an error about boot diagnostics, you can ignore it. A standard storage
account is created for boot diagnostics because the specified storage account for
the virtual machine's disk is a premium storage account.

Install the SQL IaaS Agent extension


SQL Server virtual machines support automated management features with the SQL
Server IaaS Agent extension. To register your SQL Server VM with the extension, run the
New-AzSqlVM command after the virtual machine is created. Specify the license type for
your SQL Server VM, choosing between pay-as-you-go or bring-your-own-license via the
Azure Hybrid Benefit. For more information about licensing, see licensing model.

PowerShell

New-AzSqlVM -ResourceGroupName $ResourceGroupName -Name $VMName -Location $Location -LicenseType <PAYG/AHUB>

There are three ways to register with the extension:

Automatically for all current and future VMs in a subscription
Manually for a single VM
Manually for multiple VMs in bulk

Stop or remove a VM
If you don't need the VM to run continuously, you can avoid unnecessary charges by
stopping it when not in use. The following command stops the VM but leaves it
available for future use.

PowerShell

Stop-AzVM -Name $VMName -ResourceGroupName $ResourceGroupName

You can also permanently delete all resources associated with the virtual machine with
the Remove-AzResourceGroup command. Doing so permanently deletes the virtual
machine as well, so use this command with care.
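
For reference, a sketch of that cleanup command; it is irreversible, so double-check the
resource group name first:

PowerShell

# Permanently delete the resource group, the VM, and all associated resources
Remove-AzResourceGroup -Name $ResourceGroupName -Force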

Example script
The following script contains the complete PowerShell script for this tutorial. It assumes
that you have already set up the Azure subscription to use with the Connect-AzAccount
and Select-AzSubscription commands.

PowerShell

# Variables

## Global
$Location = "SouthCentralUS"
$ResourceGroupName = "sqlvm2"

## Storage
$StorageName = $ResourceGroupName + "storage"
$StorageSku = "Premium_LRS"

## Network
$InterfaceName = $ResourceGroupName + "ServerInterface"
$NsgName = $ResourceGroupName + "nsg"
$VNetName = $ResourceGroupName + "VNet"
$SubnetName = "Default"
$VNetAddressPrefix = "10.0.0.0/16"
$VNetSubnetAddressPrefix = "10.0.0.0/24"
$TCPIPAllocationMethod = "Dynamic"
$DomainName = $ResourceGroupName

## Compute
$VMName = $ResourceGroupName + "VM"
$ComputerName = $ResourceGroupName + "Server"
$VMSize = "Standard_DS13"
$OSDiskName = $VMName + "OSDisk"

## Image
$PublisherName = "MicrosoftSQLServer"
$OfferName = "SQL2017-WS2016"
$Sku = "SQLDEV"
$Version = "latest"

# Resource Group
New-AzResourceGroup -Name $ResourceGroupName -Location $Location

# Storage
$StorageAccount = New-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageName -SkuName $StorageSku -Kind "Storage" -Location $Location

# Network
$SubnetConfig = New-AzVirtualNetworkSubnetConfig -Name $SubnetName -AddressPrefix $VNetSubnetAddressPrefix

$VNet = New-AzVirtualNetwork -Name $VNetName -ResourceGroupName $ResourceGroupName -Location $Location -AddressPrefix $VNetAddressPrefix -Subnet $SubnetConfig

$PublicIp = New-AzPublicIpAddress -Name $InterfaceName -ResourceGroupName $ResourceGroupName -Location $Location -AllocationMethod $TCPIPAllocationMethod -DomainNameLabel $DomainName

$NsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name "RDPRule" -Protocol Tcp -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow

$NsgRuleSQL = New-AzNetworkSecurityRuleConfig -Name "MSSQLRule" -Protocol Tcp -Direction Inbound -Priority 1001 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 1433 -Access Allow

$Nsg = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroupName -Location $Location -Name $NsgName -SecurityRules $NsgRuleRDP,$NsgRuleSQL

$Interface = New-AzNetworkInterface -Name $InterfaceName -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $VNet.Subnets[0].Id -PublicIpAddressId $PublicIp.Id -NetworkSecurityGroupId $Nsg.Id

# Compute
$VirtualMachine = New-AzVMConfig -VMName $VMName -VMSize $VMSize

$Credential = Get-Credential -Message "Type the name and password of the local administrator account."

$VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName $ComputerName -Credential $Credential -ProvisionVMAgent -EnableAutoUpdate #-TimeZone = $TimeZone

$VirtualMachine = Add-AzVMNetworkInterface -VM $VirtualMachine -Id $Interface.Id

$OSDiskUri = $StorageAccount.PrimaryEndpoints.Blob.ToString() + "vhds/" + $OSDiskName + ".vhd"

$VirtualMachine = Set-AzVMOSDisk -VM $VirtualMachine -Name $OSDiskName -VhdUri $OSDiskUri -Caching ReadOnly -CreateOption FromImage

# Image
$VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine -PublisherName $PublisherName -Offer $OfferName -Skus $Sku -Version $Version

# Create the VM in Azure
New-AzVM -ResourceGroupName $ResourceGroupName -Location $Location -VM $VirtualMachine

# Add the SQL IaaS Agent Extension, and choose the license type
New-AzSqlVM -ResourceGroupName $ResourceGroupName -Name $VMName -Location $Location -LicenseType <PAYG/AHUB>

Next steps
After the virtual machine is created, you can:
Connect to the virtual machine using RDP
Configure SQL Server settings in the portal for your VM, including:
Storage settings
Automated management tasks
Configure connectivity
Connect clients and applications to the new SQL Server instance
Deploy SQL Server to an Azure
confidential VM
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

In this article, learn how to deploy SQL Server to an Azure confidential VM.

Overview
Azure confidential VMs provide a strong, hardware-enforced boundary that hardens the
protection of the guest OS against host operator access. Choosing a confidential VM
size for your SQL Server on Azure VM provides an extra layer of protection, enabling you
to confidently store your sensitive data in the cloud and meet strict compliance
requirements.

Azure confidential VMs leverage AMD processors with SEV-SNP technology that encrypt
the memory of the VM using keys generated by the processor. This helps protect data
while it's in use (the data that is processed inside the memory of the SQL Server process)
from unauthorized access from the host OS. The OS disk of a confidential VM can also
be encrypted with keys bound to the Trusted Platform Module (TPM) chip of the virtual
machine, reinforcing protection for data-at-rest.

Azure confidential VMs are available in both the general purpose and memory
optimized VM size series.

Recommendations for disk encryption are different for confidential VMs than for the
other VM sizes. See disk encryption to learn more.

Deploy SQL Server to a confidential VM


For detailed steps to deploy a confidential VM, review the Quickstart: Deploy a SQL
Server on Azure VM.

To deploy a SQL Server VM to a confidential Azure VM, select the following values when
deploying a SQL Server VM:

1. Choose a supported region. To validate region supportability, look for the ECadsv5-
series or DCadsv5-series in VM products Available by Azure region .
2. Set the Security type to Confidential virtual machines. If this option is grayed out,
it's likely the chosen region doesn't currently support confidential VMs. Choose a
different region from the drop-down.
3. Choose a supported confidential SQL Server image. To change the SQL Server
image, select See all images and then filter by Security type = Confidential VMs
to identify all SQL Server images that support confidential VMs.
4. Choose a supported VM size. To see all available sizes, select See all sizes to
identify all the VM sizes that support confidential VMs, as well as the sizes that
don't.
5. (Optional) Configure confidential disk encryption. Follow the steps in the Disk
section of the Quickstart.

Identify available images


To view all SQL Server images that support confidential VMs, begin to deploy a SQL
Server virtual machine from the Azure portal , and then select See all images under
Images on the Basics tab to open the Azure Marketplace. Type sql in the search box,
and then filter the options by choosing Security type = Confidential to view all SQL
Server images that support confidential VMs.

Limitations
Currently, only the following list of pre-built SQL Server images support Azure
confidential VMs. If you wish to use a different combination of SQL Server
version/edition/operating system with confidential VMs, you can deploy an image
of your choice and then self-install SQL Server.
SQL Server 2022 Enterprise / Developer / Standard / Web on Windows Server 2022 - x64 Gen 2
SQL Server 2019 Enterprise on Windows Server 2022 Database Engine Only - x64 Gen 2
SQL Server 2017 Enterprise on Windows Server 2019 Database Engine Only - x64 Gen 2
Confidential VMs aren't currently available in all regions. To validate region
supportability, look for the ECadsv5-series or DCadsv5-series in VM products
Available by Azure region .

Next steps
In this article, you learned to deploy SQL Server to a confidential virtual machine in the
Azure portal. To learn more about how to migrate your data to the new SQL Server, see
the following article.

Migrate a database to a SQL VM


Manage SQL Server VMs by using the
Azure portal
Article • 04/05/2023

Applies to:
SQL Server on Azure VM

In the Azure portal , the SQL virtual machines resource is an independent


management service to manage SQL Server on Azure Virtual Machines (VMs) that have
been registered with the SQL Server IaaS Agent extension. You can use the resource to
view all of your SQL Server VMs simultaneously and modify settings dedicated to SQL
Server:

The SQL virtual machines resource management point is different from the Virtual
machine resource, which you use to manage the VM itself, such as starting, stopping, or
restarting it.

Prerequisite
The SQL virtual machines resource is only available to SQL Server VMs that have been
registered with the SQL IaaS Agent extension.

Access the resource


To access the SQL virtual machines resource, do the following:

1. Open the Azure portal .

2. Select All Services.

3. Enter SQL virtual machines in the search box.

4. (Optional): Select the star next to SQL virtual machines to add this option to your
Favorites menu.
5. Select SQL virtual machines.

6. The portal lists all SQL Server VMs available within the subscription. Select the one
that you want to manage to open the SQL virtual machines resource. Use the
search box if your SQL Server VM isn't appearing.

Selecting your SQL Server VM opens the SQL virtual machines resource:
 Tip

The SQL virtual machines resource is for dedicated SQL Server settings. Select the
name of the VM in the Virtual machine box to open settings that are specific to the
VM, but not exclusive to SQL Server.

License and edition


Use the Configure page of the SQL virtual machines resource to change your SQL Server
licensing metadata to Pay as you go, Azure Hybrid Benefit, or HA/DR for your free
Azure replica for disaster recovery.

You can also modify the edition of SQL Server from the Configure page as well, such as
Enterprise, Standard, or Developer.
Changing the license and edition metadata in the Azure portal is only supported after
the version and edition of SQL Server have been modified internally on the VM. To learn
more, see Change the version and edition of SQL Server on Azure VMs.

Storage
Use the Storage Configuration page of the SQL virtual machines resource to extend
your data, log, and tempdb drives. Review storage configuration to learn more.

For example, you can extend your storage:

It's also possible to modify your tempdb settings using the Storage configuration page,
such as the number of tempdb files, their initial size, and the autogrowth ratio.
Select Configure next to tempdb to open the tempdb Configuration page.

Choose Yes next to Configure tempdb data files to modify your settings, and then
choose Yes next to Manage tempdb database folders on restart to allow Azure to
manage your tempdb configuration and implement your settings the next time your SQL
Server service starts:
Restart your SQL Server service to apply your changes.

Patching
Use the Patching page of the SQL virtual machines resource to enable auto patching of
your VM and automatically install Windows and SQL Server updates marked as
Important. You can also configure a maintenance schedule here, such as running
patching daily, as well as a local start time for maintenance, and a maintenance window.

To learn more, see Automated patching.

Backups
Use the Backups page of the SQL virtual machines resource to configure your
automated backup settings, such as the retention period, which storage account to use,
encryption, whether or not to back up system databases, and a backup schedule.

To learn more, see Automated backup.

High availability (Preview)


Once you've configured your availability group by using the Azure portal, use the High
Availability page of the SQL virtual machines resource to monitor the health of your
existing Always On availability group.
SQL best practices assessment
Use the SQL best practices assessment page of the SQL virtual machines resource to
assess the health of your SQL Server VM. Once the feature is enabled, your SQL Server
instances and databases are scanned and recommendations are surfaced to improve
performance (indexes, statistics, trace flags, and so on) and identify missing best
practices configurations.

To learn more, see SQL best practices assessment for SQL Server on Azure VMs.

Security Configuration
Use the Security Configuration page of the SQL virtual machines resource to configure
SQL Server security settings, such as Azure Key Vault integration, least privilege mode,
or, if you're on SQL Server 2022, Azure Active Directory (Azure AD) authentication.
To learn more, see the Security best practices.

7 Note

The ability to change the connectivity and SQL Server authentication settings after
the SQL Server VM is deployed was removed from the Azure portal in April 2023.
You can still specify these settings during SQL Server VM deployment, or use SQL
Server Management Studio (SSMS) to update these settings manually from within
the SQL Server VM after deployment.

Defender for Cloud


Use the Defender for SQL page of the SQL virtual machine's resource to view Defender
for Cloud recommendations directly in the SQL virtual machine blade. Enable Microsoft
Defender for SQL to leverage this feature.
SQL IaaS Agent Extension Settings
From the SQL IaaS Agent Extension Settings page, you can repair the extension and
you can enable auto upgrade to ensure you're automatically receiving updates for the
extension each month.

Next steps
For more information, see the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Windows VM
What's new for SQL Server on Azure VMs
Change the license model for a SQL
virtual machine in Azure
Article • 03/20/2023

Applies to:
SQL Server on Azure VM

This article describes how to change the license model for a SQL Server virtual machine
(VM) in Azure by using the SQL IaaS Agent Extension.

Overview
There are three license models for an Azure VM that's hosting SQL Server: pay-as-you-
go, Azure Hybrid Benefit (AHB), and High Availability/Disaster Recovery (HA/DR). You can
modify the license model of your SQL Server VM by using the Azure portal, the Azure
CLI, or PowerShell.

The pay-as-you-go model means that the per-second cost of running the Azure
VM includes the cost of the SQL Server license.
Azure Hybrid Benefit allows you to use your own SQL Server license with a VM
that's running SQL Server.
The HA/DR license type is used for the free HA/DR replica in Azure.

Azure Hybrid Benefit allows the use of SQL Server licenses with Software Assurance
("Qualified License") on Azure virtual machines. With Azure Hybrid Benefit, customers
aren't charged for the use of a SQL Server license on a VM. But they must still pay for
the cost of the underlying cloud compute (that is, the base rate), storage, and backups.
They must also pay for I/O associated with their use of the services (as applicable).

To estimate your cost savings with Azure Hybrid Benefit, use the Azure Hybrid
Benefit Savings Calculator. To estimate the cost of pay-as-you-go licensing, review the
Azure Pricing Calculator.

According to the Microsoft Product Terms : "Customers must indicate that they are
using Azure SQL Database (Managed Instance, Elastic Pool, and Single Database), Azure
Data Factory, SQL Server Integration Services, or SQL Server Virtual Machines under
Azure Hybrid Benefit for SQL Server when configuring workloads on Azure."

To indicate the use of Azure Hybrid Benefit for SQL Server on Azure VM and be
compliant, you have three options:
Provision a virtual machine by using a bring-your-own-license SQL Server image
from Azure Marketplace. This option is available only for customers who have an
Enterprise Agreement.
Provision a virtual machine by using a pay-as-you-go SQL Server image from
Azure Marketplace and activate the Azure Hybrid Benefit.
Self-install SQL Server on Azure VM, manually register with the SQL IaaS Agent
Extension, and activate Azure Hybrid Benefit.

The license type of SQL Server can be configured when the VM is provisioned, or
anytime afterward. Switching between license models incurs no downtime, does not
restart the VM or the SQL Server service, doesn't add any additional costs, and is
effective immediately. In fact, activating Azure Hybrid Benefit reduces cost.

Prerequisites
Changing the licensing model of your SQL Server VM has the following requirements:

An Azure subscription .
A SQL Server VM registered with the SQL IaaS Agent Extension.
Software Assurance is a requirement to utilize the Azure Hybrid Benefit license
type, but pay-as-you-go customers can use the HA/DR license type if the VM is
being used as a passive replica in a high availability/disaster recovery
configuration.

Change license model


Azure portal

You can modify the license model directly from the portal:

1. Open the Azure portal and open the SQL virtual machines resource for your
SQL Server VM.
2. Select Configure under Settings.
3. Select the Azure Hybrid Benefit option, and select the check box to confirm
that you have a SQL Server license with Software Assurance.
4. Select Apply at the bottom of the Configure page.
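
The same change can also be scripted. A minimal sketch using the Az.SqlVirtualMachine
module, assuming $ResourceGroupName and $VMName identify a SQL Server VM that's already
registered with the SQL IaaS Agent extension:

PowerShell

# Switch the license model to Azure Hybrid Benefit; use PAYG to switch back to pay-as-you-go.
Update-AzSqlVM -ResourceGroupName $ResourceGroupName -Name $VMName -LicenseType AHUB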
Remarks
Azure Cloud Solution Provider (CSP) customers can use the Azure Hybrid Benefit
by first deploying a pay-as-you-go VM and then converting it to bring-your-own-
license, if they have active Software Assurance.
If you drop your SQL virtual machines resource, you will go back to the hard-coded
license setting of the image.
The ability to change the license model is a feature of the SQL IaaS Agent
Extension. Deploying an Azure Marketplace image through the Azure portal
automatically registers a SQL Server VM with the extension. But customers who are
self-installing SQL Server will need to manually register their SQL Server VM.
Adding a SQL Server VM to an availability set requires re-creating the VM. As such,
any VMs added to an availability set will go back to the default pay-as-you-go
license type. Azure Hybrid Benefit will need to be enabled again.

Limitations
Changing the license model is:

Only supported for the Standard and Enterprise editions of SQL Server. License
changes for Express, Web, and Developer are not supported.
Only supported for virtual machines deployed through the Azure Resource
Manager model. Virtual machines deployed through the classic model are not
supported.
Available only for the public or Azure Government clouds. Currently unavailable for
the Azure China region.

Additionally, changing the license model to Azure Hybrid Benefit requires Software
Assurance .
7 Note

Only SQL Server core-based licensing with Software Assurance or subscription
licenses are eligible for Azure Hybrid Benefit. If you are using Server + CAL licensing
for SQL Server and you have Software Assurance, you can use bring-your-own-
license to an Azure SQL Server virtual machine image to leverage license mobility
for these servers, but you cannot leverage the other features of Azure Hybrid
Benefit.

Remove a SQL Server instance and its


associated licensing and billing costs
Before you begin

To avoid being charged for your SQL Server instance, see Pricing guidance for SQL
Server on Azure VMs.

To remove a SQL Server instance and associated billing from a Pay-As-You-Go SQL
Server VM, or if you are being charged for a SQL instance after uninstalling it:

1. Back up your data.


2. If necessary, uninstall SQL Server, including the SQL IaaS Agent extension.
3. Download the free SQL Server Express edition.
4. Install the SQL IaaS Agent extension.
5. To stop billing, change edition in the portal to Express edition.

Optional

To disable the SQL Server Express edition service, disable service startup.

Common issues and questions related to licensing

Review the Licensing FAQ to see the most common questions.

Known errors
Review the commonly known errors and their resolutions.

The Resource 'Microsoft.SqlVirtualMachine/SqlVirtualMachines/<resource-group>'


under resource group '<resource-group>' was not found.
This error occurs when you try to change the license model on a SQL Server VM that has
not been registered with the SQL IaaS Agent extension:

The Resource 'Microsoft.SqlVirtualMachine/SqlVirtualMachines/\<resource-group>'
under resource group '\<resource-group>' was not found. The property
'sqlServerLicenseType' cannot be found on this object. Verify that the property
exists and can be set.

You'll need to register your SQL Server VM with the SQL IaaS Agent extension.

Change licensing to AHB, BYOL, HA/DR, or PAYG

Make sure your subscription is registered with the resource provider (RP).

The SQL IaaS Agent extension is required to change the license. Make sure you remove
and reinstall the SQL IaaS Agent extension if it's in a failed state.

SQL Server edition, version, or licensing doesn't display correctly in the Azure portal
after an edition or version upgrade

Make sure your subscription is registered with the resource provider (RP).

The SQL IaaS Agent extension is required to change the license. Make sure you repair
the extension if it's in a failed state.

Next steps
For more information, see the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Windows VM
What's new for SQL Server on Azure VMs
Overview of SQL IaaS Agent Extension
In-place change of SQL Server edition -
SQL Server on Azure VMs
Article • 06/23/2023

Applies to:
SQL Server on Azure VM

This article describes how to change the edition of SQL Server on a Windows virtual
machine in Azure.

The edition of SQL Server is determined by the product key, and is specified during the
installation process using the installation media. The edition dictates what features are
available in the SQL Server product. You can change the SQL Server edition with the
installation media and either downgrade to reduce cost or upgrade to enable more
features.

Once the edition of SQL Server has been changed internally to the SQL Server VM, you
must then update the edition property of SQL Server in the Azure portal for billing
purposes.

Prerequisites
To do an in-place change of the edition of SQL Server, you need the following:

An Azure subscription .
A SQL Server VM on Windows registered with the SQL IaaS Agent extension.
Setup media with the desired edition of SQL Server. Customers who have Software
Assurance can obtain their installation media from the Volume Licensing
Center . Customers who don't have Software Assurance can deploy an Azure
Marketplace SQL Server VM image with the desired edition of SQL Server and then
copy the setup media (typically located in C:\SQLServerFull ) from it to their target
SQL Server VM.

Upgrade an edition

2 Warning

Upgrading the edition of SQL Server will restart the service for SQL Server, along
with any associated services, such as Analysis Services and R Services.
To upgrade the edition of SQL Server, obtain the SQL Server setup media for the desired
edition of SQL Server, and then do the following:

1. Open Setup.exe from the SQL Server installation media.

2. Go to Maintenance and choose the Edition Upgrade option.

3. Select Next until you reach the Ready to upgrade edition page, and then select
Upgrade. The setup window might stop responding for a few minutes while the
change is taking effect. A Complete page will confirm that your edition upgrade is
finished.

4. After the SQL Server edition is upgraded, modify the edition property of the SQL
Server virtual machine in the Azure portal. This will update the metadata and
billing associated with this VM.

After you change the edition of SQL Server, register your SQL Server VM with the SQL
IaaS Agent extension again so that you can use the Azure portal to view the edition of
SQL Server. Then be sure to Change the edition of SQL Server in the Azure portal.
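
To confirm the installed edition from within the VM, you can query SERVERPROPERTY. A
minimal sketch, assuming Invoke-Sqlcmd (from the SqlServer PowerShell module) is
available on the VM and you're connecting to the default instance:

PowerShell

# Returns the installed edition, for example "Standard Edition (64-bit)".
Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT SERVERPROPERTY('Edition') AS Edition;"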

Downgrade an edition
To downgrade the edition of SQL Server, you need to completely uninstall SQL Server,
and reinstall it again with the desired edition setup media. You can get the setup media
by deploying a SQL Server VM from the marketplace image with your desired edition,
and then copying the setup media to the target SQL Server VM, or using the Volume
Licensing Center if you have software assurance.

2 Warning

Uninstalling SQL Server might incur additional downtime.

You can downgrade the edition of SQL Server by following these steps:

1. Back up all databases, including the system databases.


2. Move system databases (master, model, and msdb) to a new location.
3. Completely uninstall SQL Server and all associated services.
4. Restart the virtual machine.
5. Install SQL Server by using the media with the desired edition of SQL Server.
6. Install the latest service packs and cumulative updates.
7. Replace the new system databases that were created during installation with the
system databases that you previously moved to a different location.
8. After the SQL Server edition is downgraded, modify the edition property of the
SQL Server virtual machine in the Azure portal. This will update the metadata and
billing associated with this VM.

After you change the edition of SQL Server, register your SQL Server VM with the SQL
IaaS Agent extension again so that you can use the Azure portal to view the edition of
SQL Server. Then be sure to Change the edition of SQL Server in the Azure portal.

Change edition property for billing


Once you've modified the edition of SQL Server using the installation media, and you've
registered your SQL Server VM with the SQL IaaS Agent extension, you can then use the
Azure portal or the Azure CLI to modify the edition property of the SQL Server VM for
billing purposes.

Portal

To change the edition property of the SQL Server VM for billing purposes by using
the Azure portal, follow these steps:

1. Sign in to the Azure portal .

2. Go to your SQL Server virtual machine resource.


3. Under Settings, select Configure. Then select your desired edition of SQL
Server from the drop-down list under Edition.

4. Review the warning that says you must change the SQL Server edition first,
and that the edition property must match the SQL Server edition.

5. Select Apply to apply your edition metadata changes.
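
The same metadata change can be scripted. A minimal sketch using the Az.SqlVirtualMachine
module, assuming your module version exposes the -Sku parameter on Update-AzSqlVM and
that $ResourceGroupName and $VMName identify your SQL Server VM; the edition value is
illustrative:

PowerShell

# Update only the edition metadata used for billing; this doesn't change the
# SQL Server installation itself, which you must modify first with setup media.
Update-AzSqlVM -ResourceGroupName $ResourceGroupName -Name $VMName -Sku Standard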

Remarks
The edition property for the SQL Server VM must match the edition of the SQL
Server instance installed for all SQL Server virtual machines, including both pay-as-
you-go and bring-your-own-license types of licenses.
If you drop your SQL Server VM resource, you will go back to the hard-coded
edition setting of the image.
The ability to change the edition is a feature of the SQL IaaS Agent extension.
Deploying an Azure Marketplace image through the Azure portal automatically
registers a SQL Server VM with the SQL IaaS Agent extension. However, customers
who are self-installing SQL Server will need to manually register their SQL Server
VM.
Adding a SQL Server VM to an availability set requires re-creating the VM. Any
VMs added to an availability set will go back to the default edition, and the edition
will need to be modified again.

Next steps
For more information, see the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Windows VM
What's new for SQL Server on Azure VMs
In-place change of SQL Server version -
SQL Server on Azure VMs
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

This article describes how to change the version of Microsoft SQL Server on a Windows
virtual machine (VM) in Microsoft Azure.

Planning for a version upgrade


Consider the following prerequisites before upgrading your version of SQL Server:

1. Decide what version of SQL Server you want to upgrade to:

What's new in SQL Server 2022


What's new in SQL Server 2019
What's new in SQL Server 2017

2. We recommend that you check the compatibility certification for the version that
you are going to change to so that you can use the database compatibility modes
to minimize the effect of the upgrade.

3. You can review the following articles to help ensure a successful outcome:

Video: Modernizing SQL Server | Pam Lahoud & Pedro Lopes | 20 Years of
PASS
Database Experimentation Assistant for A/B testing
Upgrading Databases by using the Query Tuning Assistant
Change the Database Compatibility Level and use the Query Store

Prerequisites
To do an in-place upgrade of SQL Server, you need the following:

SQL Server installation media. Customers who have Software Assurance can
obtain their installation media from the Volume Licensing Center . Customers
who don't have Software Assurance can deploy an Azure Marketplace SQL Server
VM image with the desired version of SQL Server and then copy the setup media
(typically located in C:\SQLServerFull ) from it to their target SQL Server VM.
Version upgrades should follow the support upgrade paths.
Upgrade SQL Version

2 Warning

Upgrading the version of SQL Server will restart the service for SQL Server in
addition to any associated services, such as Analysis Services and R Services.

To upgrade the version of SQL Server, obtain the SQL Server setup media for a later
version that supports the upgrade path of SQL Server, and then follow these steps:

1. Back up the databases, including system (except tempdb) and user databases,
before you start the process. You can also create an application-consistent VM-
level backup by using Azure Backup Services.

2. Start Setup.exe from the SQL Server installation media.

3. The Installation Wizard starts the SQL Server Installation Center. To upgrade an
existing instance of SQL Server, select Installation on the navigation pane, and
then select Upgrade from an earlier version of SQL Server.

4. On the Product Key page, select an option to indicate whether you are upgrading
to a free edition of SQL Server or you have a PID key for a production version of
the product. For more information, see Editions and supported features of SQL
Server 2019 (15.x) and Supported version and edition Upgrades (SQL Server 2016).

5. Select Next until you reach the Ready to upgrade page, and then select Upgrade.
The setup window might stop responding for several minutes while the change is
taking effect. A Complete page will confirm that your upgrade is completed. For a
step-by-step procedure to upgrade, see the complete procedure.

If you changed the SQL Server edition in addition to the version, also update the
edition, and refer to the Verify the version and edition in the portal section to
update the SQL Server VM resource.
Downgrade the version of SQL Server
To downgrade the version of SQL Server, you have to completely uninstall SQL Server,
and reinstall it again by using the desired version. This is similar to a fresh installation of
SQL Server because you will not be able to restore the earlier database from a later
version to the newly installed earlier version. The databases will have to be re-created
from scratch. If you also changed the edition of SQL Server during the upgrade, change
the Edition property of the SQL Server VM in the Azure portal to the new edition value.
This updates the metadata and billing that is associated with this VM.

2 Warning

An in-place downgrade of SQL Server is not supported.

You can downgrade the version of SQL Server by following these steps:

1. Make sure that you aren't using any feature that is available only in the later
version.

2. Back up all databases, including system (except tempdb) and user databases.

3. Export all the necessary server-level objects (such as server triggers, roles, logins,
linked servers, jobs, credentials, and certificates).
4. If you do not have scripts to re-create your user databases on the earlier version,
you must script out all objects and export all data by using BCP.exe, SSIS, or
DACPAC.

Make sure that you select the correct options when you script such items as the
target version, dependent objects, and advanced options.

5. Completely uninstall SQL Server and all associated services.

6. Restart the VM.

7. Install SQL Server by using the media for the desired version of the program.

8. Install the latest service packs and cumulative updates.

9. Import all the necessary server-level objects (that were exported in Step 3).

10. Re-create all the necessary user databases from scratch (by using created scripts or
the files from Step 4).

Verify the version and edition in the portal


After you change the version of SQL Server, register your SQL Server VM with the SQL
IaaS Agent extension again so that you can use the Azure portal to view the version of
SQL Server. The listed version number should now reflect the newly upgraded version
and edition of your SQL Server installation.
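
To confirm the result of the upgrade from within the VM, you can query SERVERPROPERTY. A
minimal sketch, assuming Invoke-Sqlcmd (from the SqlServer PowerShell module) is
available on the VM:

PowerShell

# Returns the product version (for example, 16.0.x for SQL Server 2022) and patch level.
Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT SERVERPROPERTY('ProductVersion') AS Version, SERVERPROPERTY('ProductLevel') AS Level;"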
Remarks
We recommend that you initiate backups/update statistics/rebuild indexes/check
consistency after the upgrade is finished. You can also check the individual
database compatibility levels to make sure that they reflect your desired level.
After SQL Server is updated on the VM, make sure that the Edition property of SQL
Server in the Azure portal matches the installed edition number for billing.
The ability to change the edition is a feature of the SQL IaaS Agent extension.
Deploying an Azure Marketplace image through the Azure portal automatically
registers a SQL Server VM with the extension. However, customers who are self-
installing SQL Server will have to manually register their SQL Server VM.
If you drop your SQL Server VM resource, the hard-coded edition setting of the
image is restored.

Next steps
For more information, see the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Windows VM
What's new for SQL Server on Azure VMs
Configure storage for SQL Server VMs
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

This article teaches you how to configure your storage for your SQL Server on Azure
Virtual Machines (VMs).

SQL Server VMs deployed through marketplace images automatically follow default
storage best practices which can be modified during deployment. Some of these
configuration settings can be changed after deployment.

Prerequisites
To use the automated storage configuration settings, your virtual machine requires the
following characteristics:

Provisioned with a SQL Server gallery image.


Uses the Resource Manager deployment model.
Uses premium SSDs.

New VMs
The following sections describe how to configure storage for new SQL Server virtual
machines.

Azure portal
When provisioning an Azure VM using a SQL Server gallery image, select Change
configuration under Storage on the SQL Server Settings tab to open the Configure
storage page. You can either leave the values at default, or modify the type of disk
configuration that best suits your needs based on your workload.
Choose the drive location for your data files and log files, specifying the disk type, and
number of disks. Use the IOPS values to determine the best storage configuration to
meet your business needs. Choosing premium storage sets the caching to ReadOnly for
the data drive, and None for the log drive as per SQL Server VM performance best
practices.
The disk configuration is completely customizable so that you can configure the storage
topology, disk type, and IOPS you need for your SQL Server VM workload. You also have
the ability to use UltraSSD (preview) as an option for the Disk type if your SQL Server
VM is in one of the supported regions (East US 2, SouthEast Asia, and North Europe) and
you've enabled ultra disks for your subscription.

Configure your tempdb database settings under Tempdb storage, such as the location of
the database files, as well as the number of files, initial size, and autogrowth size in MB.
Currently, during deployment, the max number of tempdb files is 8, but more files can be
added after the SQL Server VM is deployed.

Additionally, you have the ability to set the caching for the disks. Azure VMs have a
multi-tier caching technology called Blob Cache when used with Premium Disks. Blob
Cache uses a combination of the Virtual Machine RAM and local SSD for caching.

Disk caching for Premium SSD can be ReadOnly, ReadWrite or None.

ReadOnly caching is highly beneficial for SQL Server data files that are stored on
Premium Storage. ReadOnly caching brings low read latency, high read IOPS, and high
throughput because reads are performed from the cache, which is within the VM memory
and local SSD. These reads are much faster than reads from the data disk, which come
from Azure Blob storage. Premium storage doesn't count the reads served from the
cache toward the disk IOPS and throughput. Therefore, your application is able to
achieve higher total IOPS and throughput.

None caching should be used for the disks hosting the SQL Server log file,
because the log file is written sequentially and doesn't benefit from ReadOnly caching.
ReadWrite caching shouldn't be used to host SQL Server files, because SQL Server doesn't
support data consistency with the ReadWrite cache. Writes waste capacity of
the ReadOnly blob cache, and latencies slightly increase if writes go through
ReadOnly blob cache layers.

 Tip

Be sure that your storage configuration matches the limitations imposed by
the selected VM size. Choosing storage parameters that exceed the
performance cap of the VM size results in the warning: The desired
performance might not be reached due to the maximum virtual machine disk
performance cap. Either decrease the IOPS by changing the disk type, or
increase the performance cap limitation by increasing the VM size. This doesn't
stop provisioning.

Based on your choices, Azure performs the following storage configuration tasks after
creating the VM:

Creates and attaches Premium SSDs to the virtual machine.


Configures the data disks to be accessible to SQL Server.
Configures the data disks into a storage pool based on the specified size and
performance (IOPS and throughput) requirements.
Associates the storage pool with a new drive on the virtual machine.
Optimizes this new drive based on your specified workload type (Data
warehousing, Transactional processing, or General).

For a full walkthrough of how to create a SQL Server VM in the Azure portal, see the
provisioning tutorial.

Resource Manager templates


If you use the following Resource Manager templates, two premium data disks are
attached by default, with no storage pool configuration. However, you can customize
these templates to change the number of premium data disks that are attached to the
virtual machine.

Create VM with Automated Backup


Create VM with Automated Patching
Create VM with AKV Integration

Quickstart template
You can use the following quickstart template to deploy a SQL Server VM using storage
optimization.

Create VM with storage optimization


Create VM using UltraSSD

7 Note

Some VM sizes may not have temporary or local storage. If you deploy a SQL
Server on Azure VM without temporary storage, tempdb data and log files are
placed in the data folder.

Existing VMs
For existing SQL Server VMs, you can modify some storage settings in the Azure portal.
Open your SQL virtual machines resource, and select Overview. The SQL Server
Overview page shows the current storage usage of your VM. All drives that exist on
your VM are displayed in this chart. For each drive, the storage space displays in four
sections:

SQL data
SQL log
Other (non-SQL storage)
Available

To modify the storage settings, select Storage configuration under Settings.

You can modify the disk settings for the drives that were configured during the SQL
Server VM creation process. Selecting Configure opens the drive modification page,
allowing you to change the disk type, as well as add additional disks.
You can also configure the settings for tempdb directly from the Azure portal, such as the
number of data files, their initial size, and the autogrowth ratio. See configure tempdb
to learn more.

Automated changes
This section provides a reference for the storage configuration changes that Azure
automatically performs during SQL Server VM provisioning or configuration in the Azure
portal.

Azure configures a storage pool from storage selected from your VM. The next
section of this topic provides details about storage pool configuration.
Automatic storage configuration always uses premium SSDs P30 data disks.
Consequently, there is a 1:1 mapping between your selected number of Terabytes
and the number of data disks attached to your VM.

For pricing information, see the Storage pricing page on the Disk Storage tab.

Creation of the storage pool


Azure uses the following settings to create the storage pool on SQL Server VMs.

Setting             Value
Stripe size         256 KB (Data warehousing); 64 KB (Transactional)
Disk sizes          1 TB each
Cache               Read
Allocation size     64 KB NTFS allocation unit size
Recovery            Simple recovery (no resiliency)
Number of columns   Number of data disks, up to 8¹

¹ After the storage pool is created, you cannot alter the number of columns in the
storage pool.

Workload optimization settings


The following table describes the three workload type options available and their
corresponding optimizations:

Workload type      Description                                                      Optimizations
General            Default setting that supports most workloads                     None
Transactional      Optimizes the storage for traditional database OLTP workloads    Trace Flag 1117, Trace Flag 1118
Data warehousing   Optimizes the storage for analytic and reporting workloads       Trace Flag 610, Trace Flag 1117
7 Note

You can only specify the workload type when you provision a SQL Server virtual
machine by selecting it in the storage configuration step.

Enable caching
Change the caching policy at the disk level. You can do so using the Azure portal,
PowerShell, or the Azure CLI.

To change your caching policy in the Azure portal, follow these steps:

1. Stop your SQL Server service.

2. Sign into the Azure portal .

3. Navigate to your virtual machine, select Disks under Settings.


4. Choose the appropriate caching policy for your disk from the drop-down.

5. After the change takes effect, reboot the SQL Server VM and start the SQL Server
service.

Enable Write Accelerator


Write Acceleration is a disk feature that is only available for the M-Series Virtual
Machines (VMs). The purpose of write acceleration is to improve the I/O latency of
writes against Azure Premium Storage when you need single digit I/O latency due to
high volume mission critical OLTP workloads or data warehouse environments.
Stop all SQL Server activity and shut down the SQL Server service before making
changes to your write acceleration policy.

If your disks are striped, enable Write Acceleration for each disk individually, and your
Azure VM should be shut down before making any changes.

To enable Write Acceleration using the Azure portal, follow these steps:

1. Stop your SQL Server service. If your disks are striped, shut down the virtual
machine.

2. Sign into the Azure portal .

3. Navigate to your virtual machine, select Disks under Settings.

4. Choose the cache option with Write Accelerator for your disk from the drop-
down.
5. After the change takes effect, start the virtual machine and SQL Server service.

Disk striping
For more throughput, you can add additional data disks and use disk striping. To
determine the number of data disks, analyze the throughput and bandwidth required
for your SQL Server data files, including the log and tempdb. Throughput and
bandwidth limits vary by VM size. To learn more, see VM Size

For Windows 8/Windows Server 2012 or later, use Storage Spaces with the
following guidelines:

1. Set the interleave (stripe size) to 64 KB (65,536 bytes) to avoid performance


impact due to partition misalignment. This must be set with PowerShell.

2. Set column count = number of physical disks. Use PowerShell when


configuring more than 8 disks (not Server Manager UI).

For example, the following PowerShell creates a new storage pool with an interleave
size of 64 KB and the number of columns equal to the number of physical disks in the
storage pool:

Windows Server 2016 +

PowerShell

$PhysicalDisks = Get-PhysicalDisk | Where-Object {$_.FriendlyName -like "*2" -or $_.FriendlyName -like "*3"}

New-StoragePool -FriendlyName "DataFiles" `
    -StorageSubsystemFriendlyName "Windows Storage on <VM Name>" `
    -PhysicalDisks $PhysicalDisks |
New-VirtualDisk -FriendlyName "DataFiles" `
    -Interleave 65536 -NumberOfColumns $PhysicalDisks.Count `
    -ResiliencySettingName simple -UseMaximumSize |
Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -AssignDriveLetter -UseMaximumSize |
Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisks" `
    -AllocationUnitSize 65536 -Confirm:$false

In Windows Server 2016 and later, the default value for -StorageSubsystemFriendlyName
is Windows Storage on <VM Name>.
For Windows 2008 R2 or earlier, you can use dynamic disks (OS striped volumes)
and the stripe size is always 64 KB. This option is deprecated as of Windows
8/Windows Server 2012. For information, see the support statement at Virtual Disk
Service is transitioning to Windows Storage Management API.

If you are using Storage Spaces Direct (S2D) with SQL Server Failover Cluster
Instances, you must configure a single pool. Although different volumes can be
created on that single pool, they will all share the same characteristics, such as the
same caching policy.

Determine the number of disks associated with your storage pool based on your
load expectations. Keep in mind that different VM sizes allow different numbers of
attached data disks. For more information, see Sizes for virtual machines.

Known issues

Configure Disk option or Storage Configuration blade on


SQL virtual machine resource is grayed out
The Storage Configuration blade can be grayed out in the Azure portal if your SQL IaaS
Agent extension is in a failed state. Repair the SQL IaaS Agent extension.

Configure on the Storage Configuration blade can be grayed out if you've customized
your storage pool, or if you are using a non-Marketplace image.

I have a disk with 1TB of unallocated space that I cannot


remove from storage pool
There is no option to remove the unallocated space from a disk that belongs to a
storage pool.

Next steps
For other topics related to running SQL Server in Azure VMs, see SQL Server on Azure
Virtual Machines.
Enable Azure AD authentication for SQL
Server on Azure VMs
Article • 05/25/2023

Applies to:
SQL Server on Azure VM

This article teaches you to enable Azure Active Directory (Azure AD) authentication for
your SQL Server on Azure virtual machines (VMs).

Overview
Starting with SQL Server 2022, you can connect to SQL Server on Azure VMs using one
of the following Azure AD identity authentication methods:

Azure AD Password
Azure AD Integrated
Azure AD Universal with Multi-Factor Authentication
Azure Active Directory access token

When you create an Azure AD login for SQL Server and when a user logs into SQL
Server using the Azure AD login, SQL Server uses a managed identity to query Microsoft
Graph. When you enable Azure AD authentication for your SQL Server on Azure VM, you
need to provide a managed identity that SQL Server can use to communicate with Azure
AD. This managed identity needs to have permission to query Microsoft Graph.

When enabling a managed identity for a resource in Azure, the security boundary of the
identity is the resource to which it's attached. For example, the security boundary for a
virtual machine with managed identities for Azure resources enabled is the virtual
machine. Any code running on that VM is able to call the managed identities endpoint
and request tokens. When enabling a managed identity for SQL Server on Azure VMs,
the identity is attached to the virtual machine, so the security boundary is the virtual
machine. The experience is similar when working with other resources that support
managed identities. For more information, read the Managed Identities FAQ.

Azure AD authentication with SQL Server on Azure VMs uses either a system-assigned
VM managed identity, or a user-assigned managed identity, which offer the following
benefits:

System-assigned managed identity offers a simplified configuration process. Since


the managed identity has the same lifetime as the virtual machine, there's no need
to delete it separately when you delete the virtual machine.
User-assigned managed identity offers scalability since it can be attached to, and
used for Azure AD authentication, for multiple SQL Server on Azure VMs.

To get started with managed identities, review Configure managed identities using the
Azure portal.

Prerequisites
To enable Azure AD authentication on your SQL Server, you need the following
prerequisites:

Use SQL Server 2022.


Register your SQL Server VM with the SQL Server IaaS Agent extension.
Have an existing system-assigned or user-assigned managed identity in the same
Azure AD tenant as your SQL Server VM. See Configure managed identities using the
Azure portal to learn more.
Azure CLI 2.48.0 or later if you intend to use the Azure CLI to configure Azure AD
authentication for your SQL Server VM.

Grant permissions
The managed identity you choose to facilitate authentication between SQL Server and
Azure AD has to have the following three Microsoft Graph application permissions (app
roles): User.Read.All, GroupMember.Read.All, and Application.Read.All.

Alternatively, adding the managed identity to the Azure AD Directory Readers role
grants sufficient permissions. Another way to assign the Directory Readers role to a
managed identity is to assign the Directory Readers role to a group in Azure AD. The
group owners can then add the Virtual Machine managed identity as a member of this
group. This minimizes involving Azure AD Global administrators and delegates the
responsibility to the group owners.

Add managed identity to the role


The steps in this section demonstrate how to add your managed identity to the Azure
AD Directory Readers role. You need to have Azure AD Global administrator privileges
to make changes to the Directory Readers role assignments. If you don't have sufficient
permission, work with your Azure AD administrator to follow the steps in the section
and grant Azure AD Directory Readers role permissions to the managed identity you
want to use to help authenticate to your SQL Server on your Azure VM.
To grant your managed identity the Azure AD Directory role permission, follow these
steps:

1. Go to Azure Active Directory in the Azure portal .

2. On the Azure Active Directory overview page, choose Roles and administrators
under Manage:

3. Type Directory readers in the search box, and then select the role Directory readers
to open the Directory Readers | Assignments page:

4. On the Directory Readers | Assignments page, select + Add assignments to open


the Add assignment page.
5. On the Add assignments page, choose No member selected under Select
members to open the Select a member page.
6. On the Select a member page, search for the managed identity you want to use
with your SQL Server VM and add to the Azure AD Directory Readers role. If you
want to use a system-assigned managed identity, search for the name of the VM
and select the associated identity. If you want to use a user-managed identity, then
search for the name of the identity and choose it. Select Select to save your
identity selection and go back to the Add assignments page.

7. Verify that you see your chosen identity under Select members and then select
Next.
8. Verify that your assignment type is set to Active and the box next to Permanently
assigned is checked. Enter a business justification, such as Adding Directory Reader
role permissions to the system-assigned identity for VM2 and then select Assign to
save your settings and go back to the Directory Readers | Assignments page.
9. On the Directory Readers | Assignments page, confirm you see your newly added
identity under Directory Readers.
Add app role permissions
You can use Azure PowerShell to grant app roles to a managed identity. To do so, follow
these steps:

1. Search for Microsoft Graph:

PowerShell

$AAD_SP = Get-AzureADServicePrincipal -Filter "DisplayName eq 'Microsoft Graph'"

2. Retrieve the managed identity:

PowerShell

$MSI = Get-AzureADServicePrincipal -Filter "DisplayName eq '<your managed identity display name>'"

3. Assign the User.Read.All role to the identity:

PowerShell

$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "User.Read.All"}

New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId `
    -ResourceId $AAD_SP.ObjectId -Id $AAD_AppRole.Id

4. Assign the GroupMember.Read.All role to the identity:

PowerShell

$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "GroupMember.Read.All"}

New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId `
    -ResourceId $AAD_SP.ObjectId -Id $AAD_AppRole.Id

5. Assign the Application.Read.All role to the identity:

PowerShell

$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "Application.Read.All"}

New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId `
    -ResourceId $AAD_SP.ObjectId -Id $AAD_AppRole.Id

You can validate permissions were assigned to the managed identity by doing the
following:

1. Go to Azure Active Directory in the Azure portal .


2. Choose Enterprise applications and then select All applications under Manage.
3. Select the managed identity and then choose Permissions under Security. You
should see the following permissions: User.Read.All , GroupMember.Read.All ,
Application.Read.All .

Enable outbound communication


For Azure AD authentication to work, you need the following:

Outbound communication from SQL Server to Azure AD and the Microsoft Graph
endpoint.
Outbound communication from the SQL client to Azure AD.

Default Azure VM configurations allow outbound communication to the Microsoft


Graph endpoint, as well as Azure AD, but some users choose to restrict outbound
communication either by using an OS level firewall, or the Azure VNet network security
group (NSG).

Firewalls on the SQL Server VM and any SQL client need to allow outbound traffic on
ports 80 and 443.

The Azure VNet NSG rule for the VNet that hosts your SQL Server VM should have the
following:
A Service Tag of AzureActiveDirectory .
Destination port ranges of: 80, 443.
Action set to Allow.
A high priority (which is a low number).
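
For example, a minimal sketch that adds such an outbound rule with Azure PowerShell,
assuming an existing network security group named $NsgName in $ResourceGroupName; the
rule name and priority are illustrative:

PowerShell

# Allow outbound traffic on ports 80 and 443 to the AzureActiveDirectory service tag.
Get-AzNetworkSecurityGroup -Name $NsgName -ResourceGroupName $ResourceGroupName |
    Add-AzNetworkSecurityRuleConfig -Name "AllowAzureAD" -Direction Outbound -Access Allow `
        -Protocol Tcp -SourceAddressPrefix * -SourcePortRange * `
        -DestinationAddressPrefix AzureActiveDirectory -DestinationPortRange 80,443 -Priority 100 |
    Set-AzNetworkSecurityGroup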

Enable Azure AD authentication


You can enable Azure AD authentication to your SQL Server VM by using the Azure
portal, or the Azure CLI.

7 Note

After Azure AD authentication is enabled, you can follow the same steps in this
section to change the configuration to use a different managed identity.

Portal

To enable Azure AD authentication to your SQL Server VM, follow these steps:

1. Navigate to your SQL virtual machines resource in the Azure portal.

2. Select Security configuration under Settings.

3. Choose Enable under Azure AD authentication.

4. Choose the managed identity type from the drop-down, either System-
assigned or User-assigned. If you choose user-assigned, then select the
identity you want to use to authenticate to SQL Server on your Azure VM from
the User-assigned managed identity drop-down that appears.
After Azure AD has been enabled, you can follow the same steps to change which
managed identity can authenticate to your SQL Server VM.

7 Note

The error The selected managed identity does not have enough permissions
for Azure AD Authentication indicates that permissions have not been

properly assigned to the identity you've selected. Check the Grant permissions
section to assign proper permissions.

Limitations
Consider the following limitations:

Azure AD authentication is only supported with Windows SQL Server 2022 VMs
registered with the SQL IaaS Agent extension and deployed to the public cloud.
The identity you choose to authenticate to SQL Server has to have either the Azure
AD Directory Readers role permissions or the following three Microsoft Graph
application permissions (app roles): User.ReadALL , GroupMember.Read.All , and
Application.Read.All .

Once Azure AD authentication is enabled, there's no way to disable it.


Currently, authenticating to SQL Server on Azure VMs through Azure AD
authentication using the FIDO2 method isn't supported.

Next steps
Review the security best practices for SQL Server.

For other articles related to running SQL Server in Azure VMs, see SQL Server on Azure
Virtual Machines overview. If you have questions about SQL Server virtual machines, see
the Frequently asked questions.

To learn more, see the other articles in this best practices series:

Quick checklist
VM size
Storage
HADR settings
Collect baseline
Automated Patching for SQL Server on
Azure virtual machines
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

Automated Patching establishes a maintenance window for an Azure virtual machine
running SQL Server. Automated updates can only be installed during this maintenance
window. For SQL Server, this restriction ensures that system updates and any associated
restarts occur at the best possible time for the database.

) Important

Only Windows and SQL Server updates marked as Important or Critical are
installed. Other SQL Server updates, such as service packs and cumulative updates
that are not marked as Important or Critical, must be installed manually.

Prerequisites
To use Automated Patching, you need the following prerequisites:

Automated Patching relies on the SQL Server IaaS Agent Extension. Current SQL
virtual machine gallery images add this extension by default. For more information,
review SQL Server IaaS Agent Extension.
Install the latest Azure PowerShell commands if you plan to configure Automated
Patching by using PowerShell.

Automated Patching is supported starting with SQL Server 2008 R2 on Windows Server
2008 R2.

Additionally, consider the following:

There are also several other ways to enable automatic patching of Azure VMs, such
as Update Management or Automatic VM guest patching. Choose only one option
to automatically update your VM as overlapping tools may lead to failed updates.
If you want to receive ESU updates without using the automated patching feature,
you can use the built-in Windows Update channel.
For SQL Server VMs in different availability zones that participate in an Always On
availability group, configure the automated patching schedule so that availability
replicas in different availability zones aren't patched at the same time.
Settings
The following table describes the options that can be configured for Automated
Patching. The actual configuration steps vary depending on whether you use the Azure
portal or Azure Windows PowerShell commands.

Setting | Possible values | Description
Automated Patching | Enable/Disable (Disabled) | Enables or disables Automated Patching for an Azure virtual machine.
Maintenance schedule | Everyday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday | The schedule for downloading and installing Windows, SQL Server, and Microsoft updates for your virtual machine.
Maintenance start hour | 0-24 | The local start time to update the virtual machine.
Maintenance window duration | 30-180 | The number of minutes permitted to complete the download and installation of updates.
Patch Category | Important | The category of Windows updates to download and install.

Configure in the Azure portal


You can use the Azure portal to configure Automated Patching during provisioning or
for existing VMs.

New VMs
Use the Azure portal to configure Automated Patching when you create a new SQL
Server virtual machine in the Resource Manager deployment model.

On the SQL Server settings tab, select Change configuration under Automated
patching. The following Azure portal screenshot shows the SQL Automated Patching
blade.
For more information, see Provision a SQL Server virtual machine on Azure.

Existing VMs
For existing SQL Server virtual machines, open your SQL virtual machines resource and
select Patching under Settings.

When you're finished, select the OK button on the bottom of the SQL Server
configuration blade to save your changes.
If you're enabling Automated Patching for the first time, Azure configures the SQL
Server IaaS Agent in the background. During this time, the Azure portal might not show
that Automated Patching is configured. Wait several minutes for the agent to be
installed and configured. After that, the Azure portal reflects the new settings.

Configure with PowerShell


After provisioning your SQL VM, use PowerShell to configure Automated Patching.

In the following example, PowerShell is used to configure Automated Patching on an


existing SQL Server VM. The New-AzVMSqlServerAutoPatchingConfig command
configures a new maintenance window for automatic updates.

Azure PowerShell

$vmname = "vmname"

$resourcegroupname = "resourcegroupname"

$aps = New-AzVMSqlServerAutoPatchingConfig -Enable -DayOfWeek "Thursday" -


MaintenanceWindowStartingHour 11 -MaintenanceWindowDuration 120 -
PatchCategory "Important"

Set-AzVMSqlServerExtension -AutoPatchingSettings $aps -VMName $vmname -


ResourceGroupName $resourcegroupname

Based on this example, the following table describes the practical effect on the target
Azure VM:

Parameter | Effect
DayOfWeek | Patches are installed every Thursday.
MaintenanceWindowStartingHour | Updates begin at 11:00am.
MaintenanceWindowDuration | Patches must be installed within 120 minutes. Based on the start time, they must complete by 1:00pm.
PatchCategory | The only possible setting for this parameter is Important. This installs Windows updates marked Important; it doesn't install any SQL Server updates that aren't included in this category.

It could take several minutes to install and configure the SQL Server IaaS Agent.

To disable Automated Patching, run the same script without the -Enable parameter for
New-AzVMSqlServerAutoPatchingConfig. The absence of the -Enable parameter
signals the command to disable the feature.
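For example, a minimal sketch of disabling the feature, reusing the placeholder names from the example above:

Azure PowerShell

$vmname = "vmname"
$resourcegroupname = "resourcegroupname"

# Omitting -Enable signals the command to disable Automated Patching
$aps = New-AzVMSqlServerAutoPatchingConfig -DayOfWeek "Thursday" -MaintenanceWindowStartingHour 11 -MaintenanceWindowDuration 120 -PatchCategory "Important"

Set-AzVMSqlServerExtension -AutoPatchingSettings $aps -VMName $vmname -ResourceGroupName $resourcegroupname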
Next steps
For information about other available automation tasks, see SQL Server IaaS Agent
Extension.

For more information about running SQL Server on Azure VMs, see SQL Server on Azure
virtual machines overview.
SQL best practices assessment for SQL
Server on Azure VMs
Article • 03/15/2023

Applies to:
SQL Server on Azure VM

The SQL best practices assessment feature of the Azure portal identifies possible
performance issues and evaluates whether your SQL Server on Azure Virtual Machines (VMs)
is configured to follow best practices, using the rich ruleset provided by the SQL
Assessment API.

To learn more, watch this video on SQL best practices assessment:

Overview
Once the SQL best practices assessment feature is enabled, your SQL Server instance
and databases are scanned to provide recommendations for things like indexes,
deprecated features, enabled or missing trace flags, statistics, etc. Recommendations are
surfaced to the SQL VM management page of the Azure portal .

Assessment results are uploaded to your Log Analytics workspace using Microsoft
Monitoring Agent (MMA). If your VM is already configured to use Log Analytics, the SQL
best practices assessment feature uses the existing connection. Otherwise, the MMA
extension is installed to the SQL Server VM and connected to the specified Log Analytics
workspace.
Assessment run time depends on your environment (number of databases, objects, and
so on), with a duration from a few minutes, up to an hour. Similarly, the size of the
assessment result also depends on your environment. Assessment runs against your
instance and all databases on that instance. In our testing, we observed that an
assessment run can have up to 5-10% CPU impact on the machine. In these tests, the
assessment was done while a TPC-C like application was running against the SQL Server.

Prerequisites
To use the SQL best practices assessment feature, you must have the following
prerequisites:

Your SQL Server VM must be registered with the SQL Server IaaS extension.
A Log Analytics workspace in the same subscription as your SQL Server VM to
upload assessment results to.
SQL Server needs to be 2012 or higher version.

Enable
You can enable SQL best practices assessments using the Azure portal or the Azure CLI.

Azure portal

To enable SQL best practices assessments using the Azure portal, follow these steps:

1. Sign into the Azure portal and go to your SQL Server VM resource .
2. Select SQL best practices assessments under Settings.
3. Select Enable SQL best practices assessments or Configuration to navigate to
the Configuration page.
4. Check the Enable SQL best practices assessments box and provide the
following:
a. The Log Analytics workspace that assessments will be uploaded to. If the
SQL Server VM has not been associated with a workspace previously, then
choose an existing workspace in the subscription from the drop-down.
Otherwise, the previously-associated workspace is already populated.
b. The Run schedule. You can choose to run assessments on demand, or
automatically on a schedule. If you choose a schedule, then provide the
frequency (weekly or monthly), day of week, recurrence (every 1-6 weeks),
and the time of day your assessments should start (local to VM time).
5. Select Apply to save your changes and deploy the Microsoft Monitoring
Agent to your SQL Server VM if it's not deployed already. An Azure portal
notification will tell you once the SQL best practices assessment feature is
ready for your SQL Server VM.
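
If you'd rather script the configuration, here's a minimal per-VM sketch with the Azure CLI; the flags are taken from the subscription-wide script later in this article, and the resource and workspace names are placeholders:

Azure CLI

# Sketch: enable assessments on one VM with a weekly Sunday 11pm (local VM time) schedule.
# Resource and workspace names are placeholders.
az sql vm update --assessment-weekly-interval 1 --assessment-day-of-week Sunday --assessment-start-time-local "23:00" --workspace-name myWsName --workspace-rg myWsRg -g myResourceGroup -n mySqlVm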

Assess SQL Server VM


Assessments run:

On a schedule
On demand

Run scheduled assessment


You can configure assessment on a schedule using the Azure portal and the Azure CLI.

Azure portal

If you set a schedule in the configuration blade, an assessment runs automatically


at the specified date and time. Choose Configuration to modify your assessment
schedule. Once you provide a new schedule, the previous schedule is overwritten.

Run on demand assessment


After the SQL best practices assessment feature is enabled for your SQL Server VM, it's
possible to run an assessment on demand using the Azure portal, or the Azure CLI.

Azure portal

To run an on-demand assessment by using the Azure portal, select Run assessment
from the SQL best practices assessment blade of the Azure portal SQL Server VM
resource page.
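
With the Azure CLI, a one-line sketch using the start-assessment command that also appears in the script later in this article (resource names are placeholders):

Azure CLI

az sql vm start-assessment -g myResourceGroup -n mySqlVm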

View results
The Assessments results section of the SQL best practices assessments page shows a
list of the most recent assessment runs. Each row displays the start time of a run and the
status - scheduled, running, uploading results, completed, or failed. Each assessment run
has two parts: evaluating your instance and uploading the results to your Log Analytics
workspace. The status field covers both parts. Assessment results are shown in Azure
workbooks.

Access the assessment results Azure workbook in three ways:

Select the View latest successful assessment button on the SQL best practices
assessments page.
Choose a completed run from the Assessment results section of the SQL best
practices assessments page.
Select View assessment results from the Top 10 recommendations surfaced on the
Overview page of your SQL VM resource page.

Once you have the workbook open, you can use the drop-down to select previous runs.
You can view the results of a single run using the Results page or review historical
trends using the Trends page.

Results page
The Results page organizes the recommendations using tabs for All, New, and Resolved. Use
these tabs to view all recommendations from the current run, all the new
recommendations (the delta from previous runs), or resolved recommendations from
previous runs. Tabs help you track progress between runs. The Insights tab identifies the
most recurring issues and the databases with the most issues. Use these to decide
where to concentrate your efforts.

The graph groups assessment results in different categories of severity - high, medium,
low, and information. Select each category to see the list of recommendations, or search
for key phrases in the search box. It's best to start with the most severe
recommendations and go down the list.

The first grid shows you each recommendation and the number of instances your
environment hit that issue. When you select a row in the first grid, the second grid lists
all the instances for that particular recommendation. If there is no selection in the first
grid, the second grid shows all recommendations. Potentially this could be a big list. You
can use the drop downs above the grid (Name, Severity, Tags, Check Id) to filter the
results. You can also use Export to Excel and Open the last run query in the Logs view
options by selecting the small icons on the top right corner of each grid.

The passed section of the graph identifies recommendations your system already
follows.

View detailed information for each recommendation by selecting the Message field,
such as a long description, and relevant online resources.
Trends page
There are three charts on the Trends page to show changes over time: all issues, new
issues, and resolved issues. The charts help you see your progress. Ideally, the number
of recommendations should go down while the number of resolved issues goes up. The
legend shows the average number of issues for each severity level. Hover over the bars
to see the individual values for each run.

If there are multiple runs in a single day, only the latest run is included in the graphs on
the Trends page.

Enable for all VMs in a subscription


You can use the Azure CLI to enable the SQL best practices assessment feature on all
SQL Server VMs within a subscription. To do so, use the following example script:

azure-cli

# This script is formatted for use with Az CLI on Windows PowerShell. You may need to update the script for use with Az CLI on other shells.

# This script enables the SQL best practices assessment feature for all SQL Servers on Azure VMs in a given subscription. It configures the VMs to use a Log Analytics workspace to upload assessment results. It sets a schedule to start an assessment run every Sunday at 11pm (local VM time).

# Please note that if a VM is already associated with another Log Analytics workspace, it will give an error.

$subscriptionId = 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'

# Resource group where the Log Analytics workspace belongs
$myWsRg = 'myWsRg'

# Log Analytics workspace where assessment results will be stored
$myWsName = 'myWsName'

# Ensure the correct subscription is selected
az account set --subscription $subscriptionId

$sqlvms = az sql vm list | ConvertFrom-Json

foreach ($sqlvm in $sqlvms)
{
    echo "Configuring feature on $($sqlvm.id)"
    az sql vm update --assessment-weekly-interval 1 --assessment-day-of-week Sunday --assessment-start-time-local "23:00" --workspace-name $myWsName --workspace-rg $myWsRg -g $sqlvm.resourceGroup -n $sqlvm.name

    # Alternatively you can use this command to only enable the feature without setting a schedule
    # az sql vm update --enable-assessment true --workspace-name $myWsName --workspace-rg $myWsRg -g $sqlvm.resourceGroup -n $sqlvm.name

    # You can use this command to start an on-demand assessment on each VM
    # az sql vm start-assessment -g $sqlvm.resourceGroup -n $sqlvm.name
}

Known Issues
You may encounter some of the following known issues when using SQL best practices
assessments.

Configuration error for Enable SQL best practices assessment
If your virtual machine is already associated with a Log Analytics workspace that you
don't have access to or that is in another subscription, you will see an error in the
configuration blade. For the former, you can either obtain permissions for that
workspace or switch your VM to a different Log Analytics workspace by following these
instructions to remove Microsoft Monitoring Agent.

Deployment failure for Enable or Run Assessment


Refer to the deployment history of the resource group containing the SQL VM to view
the error message associated with the failed action.

Failed assessments
If the assessment or uploading the results failed for some reason, the status of that run
will indicate the failure. Clicking on the status will open a context pane where you can
see the details about the failure and possible ways to remediate the issue.

 Tip

If you have enforced TLS 1.0 or higher in Windows and disabled older SSL protocols
as described here, then you must also ensure that .NET Framework is configured to
use strong cryptography.

Next steps
To register your SQL Server VM with the SQL Server IaaS extension to SQL Server
on Azure VMs, see the articles for Automatic installation, Single VMs, or VMs in
bulk.
To learn about more capabilities available by the SQL Server IaaS extension to SQL
Server on Azure VMs, see Manage SQL Server VMs by using the Azure portal
Configure Azure Key Vault integration
for SQL Server on Azure VMs (Resource
Manager)
Article • 03/15/2023

Applies to:
SQL Server on Azure VM

There are multiple SQL Server encryption features, such as transparent data encryption
(TDE), column level encryption (CLE), and backup encryption. These forms of encryption
require you to manage and store the cryptographic keys you use for encryption. The
Azure Key Vault service is designed to improve the security and management of these
keys in a secure and highly available location. The SQL Server Connector enables SQL
Server to use these keys from Azure Key Vault.

If you are running SQL Server on-premises, there are steps you can follow to access
Azure Key Vault from your on-premises SQL Server instance. But for SQL Server on Azure
VMs, you can save time by using the Azure Key Vault Integration feature.

7 Note

The Azure Key Vault integration is available only for the Enterprise, Developer, and
Evaluation Editions of SQL Server. Starting with SQL Server 2019, Standard edition is
also supported.

When this feature is enabled, it automatically installs the SQL Server Connector,
configures the EKM provider to access Azure Key Vault, and creates the credential to
allow you to access your vault. If you looked at the steps in the previously mentioned
on-premises documentation, you can see that this feature automates steps 2 and 3. The
only thing you would still need to do manually is to create the key vault and keys. From
there, the entire setup of your SQL Server VM is automated. Once this feature has
completed this setup, you can execute Transact-SQL (T-SQL) statements to begin
encrypting your databases or backups as you normally would.

7 Note

You can also configure Key Vault integration by using a template. For more
information, see Azure quickstart template for Azure Key Vault integration .
Prepare for AKV Integration
To use Azure Key Vault Integration to configure your SQL Server VM, there are several
prerequisites:

1. Install Azure PowerShell


2. Create an Azure Active Directory
3. Create a key vault

The following sections describe these prerequisites and the information you need to
collect to later run the PowerShell cmdlets.

7 Note

This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.

Install Azure PowerShell


Make sure you have installed the latest Azure PowerShell module. For more information,
see How to install and configure Azure PowerShell.

Register an application in your Azure Active Directory


First, you need to have an Azure Active Directory (AAD) in your subscription. Among
many benefits, this allows you to grant permission to your key vault for certain users
and applications.

Next, register an application with AAD. This will give you a Service Principal account that
has access to your key vault, which your VM will need. In the Azure Key Vault article, you
can find these steps in the Register an application with Azure Active Directory section, or
you can see the steps with screenshots in the Get an identity for the application
section of this blog post. Before completing these steps, you need to collect the
following information during this registration that is needed later when you enable
Azure Key Vault Integration on your SQL VM.

After the application is added, find the Application ID (also known as AAD ClientID
or AppID) on the Registered app blade.
The application ID is assigned later to the
$spName (Service Principal name) parameter in the PowerShell script to enable
Azure Key Vault Integration.

During these steps when you create your key, copy the secret for your key as is
shown in the following screenshot. This key secret is assigned later to the
$spSecret (Service Principal secret) parameter in the PowerShell script.

The application ID and the secret will also be used to create a credential in SQL
Server.

You must authorize this new application ID (or client ID) to have the following
access permissions: get, wrapKey, unwrapKey. This is done with the Set-
AzKeyVaultAccessPolicy cmdlet. For more information, see Azure Key Vault
overview.
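
A minimal sketch of that call; $spName holds the application (client) ID collected above, and the vault name is a placeholder:

Azure PowerShell

# Sketch: grant the registered application the key permissions the extension needs.
Set-AzKeyVaultAccessPolicy -VaultName 'ContosoKeyVault' -ServicePrincipalName $spName -PermissionsToKeys get,wrapKey,unwrapKey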

Create a key vault


In order to use Azure Key Vault to store the keys you will use for encryption in your VM,
you need access to a key vault. If you have not already set up your key vault, create one
by following the steps in the Getting Started with Azure Key Vault article. Before
completing these steps, there is some information you need to collect during this set up
that is needed later when you enable Azure Key Vault Integration on your SQL VM.
Azure PowerShell

New-AzKeyVault -VaultName 'ContosoKeyVault' -ResourceGroupName 'ContosoResourceGroup' -Location 'East Asia'

When you get to the Create a key vault step, note the returned vaultUri property, which
is the key vault URL. In the example provided in that step, shown below, the key vault
name is ContosoKeyVault, therefore the key vault URL would be
https://contosokeyvault.vault.azure.net/ .

The key vault URL is assigned later to the $akvURL parameter in the PowerShell script to
enable Azure Key Vault Integration.

After the key vault is created, add a key to the key vault. This key is referenced
later when you create an asymmetric key in SQL Server.
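
A minimal sketch of adding that key with Az PowerShell; the key name matches the KeyName_in_KeyVault placeholder used in the T-SQL examples later in this article:

Azure PowerShell

# Sketch: create a software-protected key in the vault for SQL Server EKM to reference.
Add-AzKeyVaultKey -VaultName 'ContosoKeyVault' -Name 'KeyName_in_KeyVault' -Destination 'Software'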

7 Note

Extensible Key Management (EKM) Provider version 1.0.4.0 is installed on the SQL
Server VM through the SQL infrastructure as a service (IaaS) extension. Upgrading
the SQL IaaS Agent extension will not update the provider version. Please
consider manually upgrading the EKM provider version if needed (for example,
when migrating to a SQL Managed Instance).

Enable and configure Key Vault integration


You can enable Key Vault integration during provisioning or configure it for existing
VMs.

New VMs
If you are provisioning a new SQL virtual machine with Resource Manager, the Azure
portal provides a way to enable Azure Key Vault integration.
For a detailed walkthrough of provisioning, see Provision a SQL virtual machine in the
Azure portal.

Existing VMs
For existing SQL virtual machines, open your SQL virtual machines resource and select
Security under Settings. Select Enable to enable Azure Key Vault integration.

The following screenshot shows how to enable Azure Key Vault in the portal for an
existing SQL Server VM (this SQL Server instance uses a non-default port 1401):
When you're finished, select the Apply button on the bottom of the Security page to
save your changes.

7 Note

The credential name we created here will be mapped to a SQL login later. This
allows the SQL login to access the key vault.

After enabling Azure Key Vault Integration, you can enable SQL Server encryption on
your SQL VM. First, you will need to create an asymmetric key inside your key vault and
a symmetric key within SQL Server on your VM. Then, you will be able to execute T-SQL
statements to enable encryption for your databases and backups.

There are several forms of encryption you can take advantage of:

Transparent Data Encryption (TDE)


Encrypted backups
Column Level Encryption (CLE)

The following Transact-SQL scripts provide examples for each of these areas.

Prerequisites for examples


Each example is based on two prerequisites: an asymmetric key from your key vault
called CONTOSO_KEY and a credential created by the AKV Integration feature called
Azure_EKM_cred. The following Transact-SQL commands set up these prerequisites for
running the examples.
SQL

USE master;
GO

--Create the credential.
--The <<SECRET>> here requires the <Application ID> (without hyphens) and <Secret> to be passed together without a space between them.
CREATE CREDENTIAL Azure_EKM_cred
    WITH IDENTITY = 'keytestvault', --keyvault
    SECRET = '<<SECRET>>'
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov;

--Map the credential to a SQL login that has sysadmin permissions. This allows the SQL login to access the key vault when creating the asymmetric key in the next step.
ALTER LOGIN [SQL_Login]
ADD CREDENTIAL Azure_EKM_cred;

CREATE ASYMMETRIC KEY CONTOSO_KEY
FROM PROVIDER [AzureKeyVault_EKM_Prov]
WITH PROVIDER_KEY_NAME = 'KeyName_in_KeyVault', --The key name here requires the key we created in the key vault.
    CREATION_DISPOSITION = OPEN_EXISTING;

Transparent Data Encryption (TDE)


1. Create a SQL Server login to be used by the Database Engine for TDE, then add the
credential to it.

SQL

USE master;

-- Create a SQL Server login associated with the asymmetric key

-- for the Database engine to use when it loads a database

-- encrypted by TDE.

CREATE LOGIN EKM_Login

FROM ASYMMETRIC KEY CONTOSO_KEY;

GO

-- Alter the TDE Login to add the credential for use by the

-- Database Engine to access the key vault

ALTER LOGIN EKM_Login

ADD CREDENTIAL Azure_EKM_cred;

GO

2. Create the database encryption key that will be used for TDE.
SQL

USE ContosoDatabase;

GO

CREATE DATABASE ENCRYPTION KEY

WITH ALGORITHM = AES_128

ENCRYPTION BY SERVER ASYMMETRIC KEY CONTOSO_KEY;

GO

-- Alter the database to enable transparent data encryption.

ALTER DATABASE ContosoDatabase

SET ENCRYPTION ON;

GO

Encrypted backups
1. Create a SQL Server login to be used by the Database Engine for encrypting
backups, and add the credential to it.

SQL

USE master;

-- Create a SQL Server login associated with the asymmetric key

-- for the Database engine to use when it is encrypting the backup.

CREATE LOGIN EKM_Login

FROM ASYMMETRIC KEY CONTOSO_KEY;

GO

-- Alter the Encrypted Backup Login to add the credential for use by

-- the Database Engine to access the key vault

ALTER LOGIN EKM_Login

ADD CREDENTIAL Azure_EKM_cred ;

GO

2. Back up the database, specifying encryption with the asymmetric key stored in the
key vault.

SQL

USE master;

BACKUP DATABASE [DATABASE_TO_BACKUP]

TO DISK = N'[PATH TO BACKUP FILE]'

WITH FORMAT, INIT, SKIP, NOREWIND, NOUNLOAD,

ENCRYPTION(ALGORITHM = AES_256, SERVER ASYMMETRIC KEY = [CONTOSO_KEY]);

GO

Column Level Encryption (CLE)


This script creates a symmetric key protected by the asymmetric key in the key vault,
and then uses the symmetric key to encrypt data in the database.

SQL

CREATE SYMMETRIC KEY DATA_ENCRYPTION_KEY

WITH ALGORITHM=AES_256

ENCRYPTION BY ASYMMETRIC KEY CONTOSO_KEY;

DECLARE @DATA VARBINARY(MAX);

--Open the symmetric key for use in this session

OPEN SYMMETRIC KEY DATA_ENCRYPTION_KEY

DECRYPTION BY ASYMMETRIC KEY CONTOSO_KEY;

--Encrypt syntax

SELECT @DATA = ENCRYPTBYKEY(KEY_GUID('DATA_ENCRYPTION_KEY'),


CONVERT(VARBINARY,'Plain text data to encrypt'));

-- Decrypt syntax

SELECT CONVERT(VARCHAR, DECRYPTBYKEY(@DATA));

--Close the symmetric key

CLOSE SYMMETRIC KEY DATA_ENCRYPTION_KEY;

Additional resources
For more information on how to use these encryption features, see Using EKM with SQL
Server Encryption Features.

Note that the steps in this article assume that you already have SQL Server running on
an Azure virtual machine. If not, see Provision a SQL Server virtual machine in Azure. For
other guidance on running SQL Server on Azure VMs, see SQL Server on Azure Virtual
Machines overview.

Next steps
For more security information, review Security considerations for SQL Server on Azure
VMs.
Migrate log disk to Ultra disk
Article • 08/31/2022

Applies to:
SQL Server on Azure VM

Azure ultra disks deliver high throughput, high IOPS, and consistently low latency disk
storage for SQL Server on Azure Virtual Machine (VM).

This article teaches you to migrate your log disk to an ultra SSD to take advantage of
the performance benefits offered by ultra disks.

Back up database
Complete a full backup up of your database.

Attach disk
Attach the ultra disk to your virtual machine once you have enabled ultra disk
compatibility on the VM.

Ultra disk is supported on a subset of VM sizes and regions. Before proceeding, validate
that your VM is in a region, zone, and size that supports ultra disk. You can determine
and validate VM size and region using the Azure CLI or PowerShell.
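
For example, here's a PowerShell sketch; the VM size is a placeholder, and ZoneDetails is assumed to surface the ultra disk capability per zone:

Azure PowerShell

# Sketch: inspect zone-level capabilities (including ultra disk support) for a given VM size.
$vmSize = "Standard_D4s_v3"
$sku = Get-AzComputeResourceSku | Where-Object { $_.ResourceType -eq "virtualMachines" -and $_.Name -eq $vmSize }
$sku.LocationInfo.ZoneDetails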

Enable compatibility
To enable compatibility, follow these steps:

1. Go to your virtual machine in the Azure portal .

2. Stop/deallocate the virtual machine.

3. Select Disks under Settings and then select Additional settings.


4. Select Yes to Enable Ultra disk compatibility.

5. Select Save.

Attach disk
Use the Azure portal to attach an ultra disk to your virtual machine. For details, see
Attach an ultra disk.

Once the disk is attached, start your VM once more using the Azure portal.

Format disk
Connect to your virtual machine and format your ultra disk.

To format your ultra disk, follow these steps:

1. Connect to your VM by using Remote Desktop Protocol (RDP).


2. Use Disk Management to format and partition your newly attached ultra disk.

Use disk for log


Configure SQL Server to use the new log drive. You can do so using Transact-SQL (T-
SQL) or SQL Server Management Studio (SSMS). The SQL Server service account must
have full control of the new log file location.

Configure permissions
1. Verify the service account used by SQL Server. You can do so by using SQL Server
Configuration Manager or Services.msc.
2. Navigate to your new disk.
3. Create a folder (or multiple folders) to be used for your log file.
4. Right-click the folder and select Properties.
5. On the Security tab, grant full control access to the SQL Server service account.
6. Select OK to save your settings.
7. Repeat this for every root-level folder where you plan to have SQL data.
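
Steps 3 through 6 can also be scripted. Here's a sketch using icacls; the folder path is a placeholder, and NT Service\MSSQLSERVER is the default instance's service account, so substitute the account you found in step 1:

PowerShell

# Sketch: grant the SQL Server service account full control of the new log folder.
# Substitute the account identified in step 1 if it differs from the default.
icacls "F:\New_Log" /grant "NT Service\MSSQLSERVER:(OI)(CI)F"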

Use new log drive


After permission has been granted, use either Transact-SQL (T-SQL) or SQL Server
Management Studio (SSMS) to detach the database and move existing log files to the
new location.

U Caution

Detaching the database will take it offline, closing connections and rolling back any
transactions that are in-flight. Proceed with caution, and only during a downtime
maintenance window.

Transact-SQL (T-SQL)

Use T-SQL to move the existing files to a new location:

1. Connect to your database in SQL Server Management Studio and open a New
Query window.

2. Get the existing files and locations:

SQL

USE AdventureWorks

GO

sp_helpfile

GO

3. Detach the database:

SQL

USE master

GO

sp_detach_db 'AdventureWorks'

GO

4. Use file explorer to move the log file to the new location on the ultra disk.

5. Attach the database, specifying the new file locations:

SQL

sp_attach_db 'AdventureWorks',
'E:\Fixed_FG\AdventureWorks.mdf',
'E:\Fixed_FG\AdventureWorks_2.ndf',
'F:\New_Log\AdventureWorks_log.ldf'
GO

At this point, the database comes online with the log in the new location.

Next steps
Review the performance best practices for additional settings to improve performance.

For an overview of SQL Server on Azure Virtual Machines, see the following articles:

Overview of SQL Server on Windows VMs


Overview of SQL Server on Linux VMs
Automatic registration with SQL IaaS
Agent extension
Article • 03/26/2023

Applies to:
SQL Server on Azure VM

By default, Azure VMs with SQL Server 2016 or later are automatically registered with
the SQL IaaS Agent extension when detected by the CEIP service. You can enable the
automatic registration feature for your subscription to easily and automatically register
any SQL Server VMs not picked up by the CEIP service, such as older versions of SQL
Server.

This article teaches you to enable the automatic registration feature. Alternatively, you
can register a single VM, or register your VMs in bulk with the SQL IaaS Agent extension.

7 Note

SQL Server VMs deployed via the Azure marketplace after October 2022 have the
least privileged model enabled by default.
Management modes for the SQL IaaS
Agent extension were removed in March 2023.

Overview
Register your SQL Server VM with the SQL IaaS Agent extension to unlock a full feature
set of benefits.

By default, Azure VMs with SQL Server 2016 or later are automatically registered with
the SQL IaaS Agent extension when detected by the CEIP service with limited
functionality. You can use the automatic registration feature to automatically register
any SQL Server VMs not identified by the CEIP service. The license type automatically
defaults to that of the VM image. If you use a pay-as-you-go image for your VM, then
your license type will be PAYG, otherwise your license type will be AHUB by default. For
information about privacy, see the SQL IaaS Agent extension privacy statements.

Once automatic registration is enabled for a subscription, all current and future VMs that
have SQL Server installed are registered with the SQL IaaS Agent extension. This is done
by running a monthly job that detects whether or not SQL Server is installed on all the
unregistered VMs in the subscription. For unregistered VMs, the job installs the SQL IaaS
Agent extension binaries to the VM, then runs a one-time utility to check for the SQL
Server registry hive. If the SQL Server hive is detected, the virtual machine is registered
with the extension. If no SQL Server hive exists in the registry, the binaries are removed.

Automatic registration offers limited functionality of the extension, such as license
management. You can enable more features from the SQL virtual machines resource in
the Azure portal.

U Caution

If the SQL Server hive is not present in the registry, removing the binaries
might be impacted if there are resource locks in place.
If you deployed a SQL Server VM with a marketplace image which has the SQL
IaaS Agent extension preinstalled, and the extension is in a failed state or it
was removed, automatic registration checks the registry to see if SQL Server is
installed on the VM and then registers it with the extension.

Integration with centrally managed Azure Hybrid Benefit
Centrally managed Azure Hybrid Benefit (CM-AHB) is a service that helps customers
optimize their Azure costs and use other benefits such as:

Move all pay-as-you-go (full price) SQL PaaS/IaaS workloads to take advantage of
your Azure Hybrid Benefits without having to individually configure them to enable
the benefit.
Ensure that all your SQL workloads are licensed in compliance with the existing
license agreements.
Separate the license compliance management roles from devops roles using RBAC
Take advantage of free business continuity by ensuring that your passive & disaster
recovery (DR) environments are properly identified.
Use MSDN licenses in Azure for non-production environments.

CM-AHB uses data provided by the SQL IaaS Agent extension to account for the
number of SQL Server licenses used by individual Azure VMs and provides
recommendations to the billing admin during the license assignment process. Using the
recommendations ensures that you get the maximum discount by using Azure Hybrid
Benefit. If your VMs aren't registered with the SQL IaaS Agent extension when CM-AHB
is enabled by your billing admin, the service won't receive the full usage data from your
Azure subscriptions and therefore the CM-AHB recommendations will be inaccurate.
) Important

If automatic registration is activated after CM-AHB is enabled, you run the risk of
unnecessary pay-as-you-go charges for your SQL Server on Azure VM workloads.
To mitigate this risk, adjust your license assignments in CM-AHB to account for the
additional usage that will be reported by the SQL IaaS Agent extension after auto-
registration. We published an open source tool that provides insights into the
utilization of SQL Server licenses, including the utilization by the SQL Servers on
Azure Virtual Machines that are not yet registered with the SQL IaaS Agent
extension.

Prerequisites
To enable automatic registration of your SQL Server VM with the extension, you'll need:

An Azure subscription .
The client credentials used to register the virtual machines must exist in any of the
following Azure roles: Virtual Machine contributor, Contributor, or Owner.

Once automatic registration is enabled, SQL Server VMs are registered if they:

Are deployed using the Azure Resource Manager model to a Windows Server 2008 R2 (or
later) virtual machine. Windows Server 2008 isn't supported.
Have SQL Server installed.
Are deployed to the public or Azure Government cloud. Other clouds aren't
currently supported.

7 Note

Automatic registration is supported for Ubuntu Linux VMs in Azure.

Enable automatic registration


To enable automatic registration of your SQL Server VMs in the Azure portal, follow
these steps:

1. Sign into the Azure portal .

2. Navigate to the SQL virtual machines resource page.


3. Select Automatic SQL Server VM registration to open the Automatic registration
page.

4. Choose your subscription from the drop-down.

5. Read through the terms and if you agree, select I accept.

6. Select Register to enable the feature and automatically register all current and
future SQL Server VMs with the SQL IaaS Agent extension. This won't restart the
SQL Server service on any of the VMs.

Disable automatic registration


Use the Azure CLI or Azure PowerShell to disable the automatic registration feature.
When the automatic registration feature is disabled, SQL Server VMs added to the
subscription need to be manually registered with the SQL IaaS Agent extension. This
won't unregister existing SQL Server VMs that have already been registered.

Azure CLI

To disable automatic registration using Azure CLI, run the following command:

Azure CLI

az feature unregister --namespace Microsoft.SqlVirtualMachine --name BulkRegistration
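
If you later want to turn the feature back on from the command line, here's a sketch using the same namespace and feature name:

Azure CLI

az feature register --namespace Microsoft.SqlVirtualMachine --name BulkRegistration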

Enable for multiple subscriptions


You can enable the automatic registration feature for multiple Azure subscriptions by
using PowerShell.

To do so, follow these steps:

1. Save this script .

2. Navigate to where you saved the script by using an administrative Command Prompt or PowerShell window.

3. Connect to Azure ( az login ).

4. Execute the script, passing in SubscriptionIds as parameters. If no subscriptions are
specified, the script enables auto-registration for all the subscriptions in the user
account.

The following command enables auto-registration for two subscriptions:

Console

.\EnableBySubscription.ps1 -SubscriptionList a1a1a-aa11-11aa-a1a1-a11a111a1,b2b2b2-bb22-22bb-b2b2-b2b2b2bb

The following command enables auto-registration for all subscriptions:

Console

.\EnableBySubscription.ps1

Failed registration errors are stored in RegistrationErrors.csv, located in the same
directory where you saved and executed the .ps1 script from.

Next steps
Review the benefits provided by the SQL IaaS Agent extension.
Manually register a single VM
Troubleshoot known issues with the extension.
Review the SQL IaaS Agent extension privacy statements.
Review the best practices checklist to optimize for performance and security.

For more information, review the following articles:


Overview of SQL Server on a Windows VM
FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Azure VMs
What's new for SQL Server on Azure VMs
Register Windows SQL Server VM with
SQL IaaS Agent extension
Article • 04/05/2023

Applies to:
SQL Server on Azure VM

Register your SQL Server VM with the SQL IaaS Agent extension to unlock a wealth of
feature benefits for your SQL Server on Azure Windows VM.

This article teaches you to register a single SQL Server VM with the SQL IaaS Agent
extension. Alternatively, you can register all SQL Server VMs in a subscription
automatically or multiple VMs in bulk using a script.

7 Note

SQL Server VMs deployed via the Azure marketplace after October 2022 have the
least privileged model enabled by default.
Management modes for the SQL IaaS
Agent extension were removed in March 2023.

Overview
Registering with the SQL Server IaaS Agent extension creates the SQL virtual machine
resource within your subscription, which is a separate resource from the virtual machine
resource. Unregistering your SQL Server VM from the extension removes the SQL virtual
machine resource but won't drop the actual virtual machine.

Deploying a SQL Server VM Azure Marketplace image through the Azure portal
automatically registers the SQL Server VM with the extension. However, if you choose to
self-install SQL Server on an Azure virtual machine, or provision an Azure virtual
machine from a custom VHD, then you must register your SQL Server VM with the SQL
IaaS Agent extension to unlock full feature benefits and manageability. By default, Azure
VMs that have SQL Server 2016 or later installed will be automatically registered with
the SQL IaaS Agent extension when detected by the CEIP service. See the SQL Server
privacy supplement for more information. For information about privacy, see the SQL
IaaS Agent extension privacy statements.

To utilize the SQL IaaS Agent extension, you must first register your subscription with
the Microsoft.SqlVirtualMachine provider, which gives the SQL IaaS Agent extension
the ability to create resources within that specific subscription. Then you can register
your SQL Server VM with the extension.

Prerequisites
To register your SQL Server VM with the extension, you'll need:

An Azure subscription .
An Azure Resource Manager Windows Server 2008 (or greater) virtual machine with
SQL Server 2008 (or greater) deployed to the public or Azure Government cloud.
The client credentials used to register the virtual machine must exist in any of the
following Azure roles: Virtual Machine contributor, Contributor, or Owner.
The latest version of Azure CLI or Azure PowerShell (5.0 minimum).
A minimum of .NET Framework 4.5.1 or later.
To verify that none of the limitations apply to you.

Register subscription with RP


To register your SQL Server VM with the SQL IaaS Agent extension, you must first
register your subscription with the Microsoft.SqlVirtualMachine resource provider (RP).
This gives the SQL IaaS Agent extension the ability to create resources within your
subscription. You can do so by using the Azure portal, the Azure CLI, or Azure
PowerShell.

Azure portal

Register your subscription with the resource provider by using the Azure portal:

1. Open the Azure portal and go to All Services.

2. Go to Subscriptions and select the subscription of interest.

3. On the Subscriptions page, select Resource providers under Settings.

4. Enter sql in the filter to bring up the SQL-related resource providers.

5. Select Register, Re-register, or Unregister for the


Microsoft.SqlVirtualMachine provider, depending on your desired action.
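
A one-line sketch of the same registration with Az PowerShell:

Azure PowerShell

Register-AzResourceProvider -ProviderNamespace Microsoft.SqlVirtualMachine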
Register with extension
You can manually register your SQL Server VM with the SQL IaaS Agent extension by
using Azure PowerShell or the Azure CLI.

Provide the SQL Server license type as either pay-as-you-go ( PAYG ) to pay per usage,
Azure Hybrid Benefit ( AHUB ) to use your own license, or disaster recovery ( DR ) to
activate the free DR replica license.

Azure portal

It's not currently possible to register your SQL Server VM with the SQL IaaS Agent
extension by using the Azure portal.
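
With Az PowerShell, a minimal sketch looks like the following; it assumes the Az.SqlVirtualMachine module is installed, and the resource names and location are placeholders:

Azure PowerShell

# Sketch: register an existing VM with the SQL IaaS Agent extension, paying per usage.
New-AzSqlVM -Name "myVm" -ResourceGroupName "myResourceGroup" -Location "eastus" -LicenseType PAYG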

Verify registration status


You can verify if your SQL Server VM has already been registered with the SQL IaaS
Agent extension by using the Azure portal, the Azure CLI, or Azure PowerShell.

Azure portal

Verify the registration status with the Azure portal:

1. Sign in to the Azure portal .

2. Go to your SQL Server VMs.

3. Select your SQL Server VM from the list. If your SQL Server VM isn't listed
here, it likely hasn't been registered with the SQL IaaS Agent extension.
4. View the value under Status. If Status is Succeeded, then the SQL Server VM
has been registered with the SQL IaaS Agent extension successfully.

Alternatively, you can check the status by choosing Repair under the Support +
troubleshooting pane in the SQL virtual machine resource. The provisioning state
for the SQL IaaS Agent extension can be Succeeded or Failed.

An error indicates that the SQL Server VM hasn't been registered with the extension.
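
With Az PowerShell, a quick sketch to confirm the resource exists and inspect its state (names are placeholders):

Azure PowerShell

# Sketch: retrieve the SQL virtual machine resource; an error here suggests the VM isn't registered.
$sqlvm = Get-AzSqlVM -Name "myVm" -ResourceGroupName "myResourceGroup"
$sqlvm.ProvisioningState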

Unregister from extension


To unregister your SQL Server VM with the SQL IaaS Agent extension, delete the SQL
virtual machine resource using the Azure portal or Azure CLI. Deleting the SQL virtual
machine resource doesn't delete the SQL Server VM.

U Caution

Use extreme caution when unregistering your SQL Server VM from the extension.
Follow the steps carefully because it is possible to inadvertently delete the virtual
machine when attempting to remove the resource.

Azure portal

Unregister your SQL Server VM from the extension using the Azure portal:

1. Sign into the Azure portal .

2. Navigate to the SQL VM resource.


3. Select Delete.

4. Type the name of the SQL virtual machine and clear the check box next to the
virtual machine.
2 Warning

Failure to clear the checkbox next to the virtual machine name will delete
the virtual machine entirely. Clear the checkbox to unregister the SQL
Server VM from the extension but not delete the actual virtual machine.

5. Select Delete to confirm the deletion of the SQL virtual machine resource, and
not the SQL Server VM.
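
With Az PowerShell, a sketch that removes only the SQL virtual machine resource (names are placeholders; the underlying virtual machine isn't touched):

Azure PowerShell

# Sketch: delete the SQL virtual machine resource, not the VM itself.
Remove-AzSqlVM -Name "myVm" -ResourceGroupName "myResourceGroup"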

Next steps
Review the benefits provided by the SQL IaaS Agent extension.
Automatically register all VMs in a subscription.
Troubleshoot known issues with the extension.
Review the SQL IaaS Agent extension privacy statements.
Review the best practices checklist to optimize for performance and security.

To learn more, review the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Azure VMs
What's new for SQL Server on Azure VMs
Register multiple SQL VMs in Azure with
the SQL IaaS Agent extension
Article • 03/15/2023

Applies to:
SQL Server on Azure VM

This article describes how to register your SQL Server virtual machines (VMs) in bulk in
Azure with the SQL IaaS Agent extension by using the Register-SqlVMs Azure PowerShell
cmdlet.

Alternatively, you can register all SQL Server VMs automatically or individual SQL Server
VMs manually.

7 Note

SQL Server VMs deployed via the Azure marketplace after October 2022 have the
least privileged model enabled by default.
Management modes for the SQL IaaS
Agent extension were removed in March 2023.

Overview
The Register-SqlVMs cmdlet can be used to register all virtual machines in a given list of
subscriptions, resource groups, or a list of specific virtual machines. The cmdlet will
register the virtual machines and then generate both a report and a log file.

The registration process carries no risk, has no downtime, and will not restart the SQL
Server service or the virtual machine.

By default, Azure VMs with SQL Server 2016 or later are automatically registered with
the SQL IaaS Agent extension when detected by the CEIP service. You can use bulk
registration to register any SQL Server VMs that are not detected by the CEIP service.

For information about privacy, see the SQL IaaS Agent extension privacy statements.

Prerequisites
To register your SQL Server VM with the extension, you'll need the following:

An Azure subscription that has been registered with the


Microsoft.SqlVirtualMachine resource provider and contains unregistered SQL
Server virtual machines.
The client credentials used to register the virtual machines exist in any of the
following Azure roles: Virtual Machine contributor, Contributor, or Owner.
Az PowerShell 5.0 - versions higher than 5.0 currently only support MFA and are
not compatible with the script to register multiple VMs.
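
If you need to install that specific version, here's a sketch:

PowerShell

# Sketch: install Az 5.0.0 specifically, since newer versions aren't compatible with this script.
Install-Module -Name Az -RequiredVersion 5.0.0 -Scope CurrentUser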

Get started
Before proceeding, you must first create a local copy of the script, import it as a
PowerShell module, and connect to Azure.

Create the script


To create the script, copy the full script from the end of this article and save it locally as
RegisterSqlVMs.psm1 .

Import the script


After the script is created, you can import it as a module in the PowerShell terminal.

Open an administrative PowerShell terminal and navigate to where you saved the
RegisterSqlVMs.psm1 file. Then, run the following PowerShell cmdlet to import the script
as a module:

PowerShell

Import-Module .\RegisterSqlVMs.psm1

Connect to Azure
Use the following PowerShell cmdlet to connect to Azure:

PowerShell

Connect-AzAccount

All VMs in a list of subscriptions


Use the following cmdlet to register all SQL Server virtual machines in a list of
subscriptions:
PowerShell

Register-SqlVMs -SubscriptionList SubscriptionId1,SubscriptionId2

Example output:

Number of subscriptions registration failed for because you do not have access or credentials are wrong: 1

Total VMs Found: 10

VMs Already registered: 1

Number of VMs registered successfully: 4

Number of VMs failed to register due to error: 1

Number of VMs skipped as VM or the guest agent on VM is not running: 3

Number of VMs skipped as they are not running SQL Server On Windows: 1

Please find the detailed report in file


RegisterSqlVMScriptReport1571314821.txt

Please find the error details in file


VMsNotRegisteredDueToError1571314821.log

All VMs in a single subscription


Use the following cmdlet to register all SQL Server virtual machines in a single
subscription:

PowerShell

Register-SqlVMs -Subscription SubscriptionId1

Example output:

Total VMs Found: 10

VMs Already registered: 1

Number of VMs registered successfully: 5

Number of VMs failed to register due to error: 1

Number of VMs skipped as VM or the guest agent on VM is not running: 2

Number of VMs skipped as they are not running SQL Server On Windows: 1

Please find the detailed report in file


RegisterSqlVMScriptReport1571314821.txt

Please find the error details in file


VMsNotRegisteredDueToError1571314821.log

All VMs in multiple resource groups


Use the following cmdlet to register all SQL Server virtual machines in multiple resource
groups within a single subscription:

PowerShell

Register-SqlVMs -Subscription SubscriptionId1 -ResourceGroupList ResourceGroup1,ResourceGroup2

Example output:

Total VMs Found: 4

VMs Already registered: 1

Number of VMs registered successfully: 1

Number of VMs failed to register due to error: 1

Number of VMs skipped as they are not running SQL Server On Windows: 1

Please find the detailed report in file


RegisterSqlVMScriptReport1571314821.txt

Please find the error details in file


VMsNotRegisteredDueToError1571314821.log

All VMs in a resource group


Use the following cmdlet to register all SQL Server virtual machines in a single resource
group:

PowerShell

Register-SqlVMs -Subscription SubscriptionId1 -ResourceGroupName ResourceGroup1

Example output:

Total VMs Found: 4

VMs Already registered: 1

Number of VMs registered successfully: 1

Number of VMs failed to register due to error: 1

Number of VMs skipped as VM or the guest agent on VM is not running: 1

Please find the detailed report in file


RegisterSqlVMScriptReport1571314821.txt

Please find the error details in file


VMsNotRegisteredDueToError1571314821.log

Specific VMs in a single resource group


Use the following cmdlet to register specific SQL Server virtual machines within a single
resource group:

PowerShell

Register-SqlVMs -Subscription SubscriptionId1 -ResourceGroupName ResourceGroup1 -VmList VM1,VM2,VM3

Example output:

Total VMs Found: 3

VMs Already registered: 0

Number of VMs registered successfully: 1

Number of VMs skipped as VM or the guest agent on VM is not running: 1

Number of VMs skipped as they are not running SQL Server On Windows: 1

Please find the detailed report in file


RegisterSqlVMScriptReport1571314821.txt

Please find the error details in file


VMsNotRegisteredDueToError1571314821.log

A specific VM
Use the following cmdlet to register a specific SQL Server virtual machine:

PowerShell

Register-SqlVMs -Subscription SubscriptionId1 -ResourceGroupName ResourceGroup1 -Name VM1

Example output:

Total VMs Found: 1

VMs Already registered: 0

Number of VMs registered successfully: 1

Please find the detailed report in file


RegisterSqlVMScriptReport1571314821.txt

Output description
Both a report and a log file are generated every time the Register-SqlVMs cmdlet is
used.

Report
The report is generated as a .txt file named RegisterSqlVMScriptReport<Timestamp>.txt, where the timestamp is the time when the cmdlet was started. The report lists the following details:

Output value | Description
Number of subscriptions registration failed for because you do not have access or credentials are incorrect | This provides the number and list of subscriptions that had issues with the provided authentication. The detailed error can be found in the log by searching for the subscription ID.
Number of subscriptions that could not be tried because they are not registered to the resource provider | This section contains the count and list of subscriptions that have not been registered to the SQL IaaS Agent extension.
Total VMs found | The count of virtual machines that were found in the scope of the parameters passed to the cmdlet.
VMs already registered | The count of virtual machines that were skipped as they were already registered with the extension.
Number of VMs registered successfully | The count of virtual machines that were successfully registered after running the cmdlet. Lists the registered virtual machines in the format SubscriptionID, Resource Group, Virtual Machine.
Number of VMs failed to register due to error | Count of virtual machines that failed to register due to some error. The details of the error can be found in the log file.
Number of VMs skipped as the VM or the guest agent on VM is not running | Count and list of virtual machines that could not be registered as either the virtual machine or the guest agent on the virtual machine were not running. These can be retried once the virtual machine or guest agent has been started. Details can be found in the log file.
Number of VMs skipped as they are not running SQL Server on Windows | Count of virtual machines that were skipped as they are not running SQL Server or are not a Windows virtual machine. The virtual machines are listed in the format SubscriptionID, Resource Group, Virtual Machine.

Log
Errors are logged in the log file named VMsNotRegisteredDueToError<Timestamp>.log ,
where timestamp is the time when the script started. If the error is at the subscription
level, the log contains the comma-separated Subscription ID and the error message. If
the error is with the virtual machine registration, the log contains the Subscription ID,
Resource group name, virtual machine name, error code, and message separated by
commas.

Remarks
When you register SQL Server VMs with the extension by using the provided script,
consider the following:

Registration with the extension requires a guest agent running on the SQL Server
VM. Windows Server 2008 images do not have a guest agent, so these virtual
machines will fail and must be registered manually with limited functionality.
There is retry logic built in to overcome transient errors. If the virtual machine is
successfully registered, then it is a rapid operation. However, if the registration
fails, then each virtual machine will be retried. As such, you should allow significant
time to complete the registration process - though actual time requirement is
dependent on the type and number of errors.

Full script
For the full script on GitHub, see Bulk register SQL Server VMs with Az PowerShell .

Copy the full script and save it as RegisterSqlVMs.psm1.

Next steps
Review the benefits provided by the SQL IaaS Agent extension.
Manually register a single VM
Automatically register all VMs in a subscription.
Troubleshoot known issues with the extension.
Review the SQL IaaS Agent extension privacy statements.
Review the best practices checklist to optimize for performance and security.

To learn more, review the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Azure VMs
What's new for SQL Server on Azure VMs
Known issues and troubleshooting the
SQL Server IaaS agent extension
Article • 06/28/2023

Applies to:
SQL Server on Azure VM

This article helps you resolve known issues and troubleshoot errors when using the SQL
Server IaaS agent extension.

For answers to frequently asked questions about the extension, check out the FAQ.

Check prerequisites
To avoid errors due to unsupported options or limitations, verify the prerequisites for
the extension.

If you repair or reinstall the SQL IaaS Agent extension, your settings won't be preserved,
other than licensing changes. If you've repaired or reinstalled the extension, you'll have
to reconfigure automated backup, automated patching, and any other services you had
configured prior to the repair or reinstall.

Repair extension
It's possible for your SQL IaaS Agent extension to be in a failed state. Use the Azure
portal to repair the SQL IaaS Agent extension.

To repair the extension with the Azure portal:

1. Sign in to the Azure portal .

2. Go to your SQL virtual machines resource.

3. Select your SQL Server VM from the list. If your SQL Server VM isn't listed here, it
likely hasn't been registered with the SQL IaaS Agent extension.

4. Select SQL IaaS Agent Extension Settings under Help.

5. If your provisioning state shows as Failed, choose Repair to repair the extension. If
your state is Succeeded you can check the box next to Force repair to repair the
extension regardless of state.
SQL IaaS Agent extension registration fails with
error "Creating SQL Virtual Machine resource
for PowerBI VM images is not supported"
SQL IaaS Agent extension registration is blocked and not supported on
Power BI VM, SQL Server Reporting Services, and SQL Server Analysis Services images
deployed from Azure Marketplace.

Not valid state for management


Repair the extension if you see the following error message:

The SQL virtual machines resource is not in a valid state for management

Underlying virtual machine is invalid


If you see the following error message:

SQL management operations are disabled because the state of underlying virtual

machine is invalid

Consider the following:

The SQL VM may be stopped, deallocated, in a failed state, or not found. Validate
the underlying virtual machine is running.
Your SQL IaaS Agent extension may be in a failed state. Repair the extension.

Unregister your SQL VM from the extension and then register the SQL VM with the
extension again if you did any of the following:
Migrated your VM from one subscription to another.
Changed the locale or collation of SQL Server.
Changed the version of your SQL Server instance.
Changed the edition of your SQL Server instance.

Provisioning failed
Repair the extension if the SQL IaaS Agent extension status shows as Provisioning failed
in the Azure portal.

SQL VM resource unavailable in portal


If the SQL IaaS Agent extension is installed and the VM is online, but the SQL VM
resource is unavailable in the Azure portal, verify that the SQL Server and SQL Browser
services are started within the VM. If this doesn't resolve the issue, repair the extension.

Features are grayed out


If you navigate to your SQL VM resource in the Azure portal, and there are features that
are grayed out, verify that the SQL VM is running, and that you have the latest version of
the SQL IaaS Agent extension.

Change service account


Changing the service accounts for either of the two services associated with the
extension can cause the extension to fail or behave unpredictably.

The two services should run under the following accounts:

Microsoft SQL Server IaaS agent is the main service for the SQL IaaS Agent
extension and should run under the Local System account.
Microsoft SQL Server IaaS Query Service is a helper service that helps the
extension run queries within SQL Server and should run under the NT Service
account NT Service\SqlIaaSExtensionQuery .
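
To verify the logon accounts quickly, you can list both services with PowerShell. This is a minimal sketch; it assumes the display names match the service names above, which can vary by extension version:

PowerShell

# List the SQL IaaS Agent extension services and the accounts they run under.
# Display names are assumed from the text above and may vary by version.
Get-CimInstance -ClassName Win32_Service |
    Where-Object { $_.DisplayName -like 'Microsoft SQL Server IaaS*' } |
    Select-Object DisplayName, StartName, State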

Automatic registration failed


If you have a few SQL Server VMs that failed to register automatically, check the version
of SQL Server on the VMs that failed to register. By default, Azure VMs with SQL Server
2016 or later are automatically registered with the SQL IaaS Agent extension when
detected by the CEIP service. SQL Server VMs that have versions earlier than 2016 have
to be manually registered individually or in bulk.

High resource consumption


If you notice that the SQL IaaS Agent extension is consuming unexpectedly high CPU or
memory, verify the extension is on the latest version. If so, restart Microsoft SQL Server
IaaS Agent from services.msc .
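
You can also restart the service with PowerShell instead of services.msc. A minimal sketch, assuming the service display name shown above:

PowerShell

# Restart the main SQL IaaS Agent extension service (assumed display name).
Restart-Service -DisplayName 'Microsoft SQL Server IaaS Agent'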

Can't extend disks


Extending your disks from the Storage Configuration page of the SQL VM resource is
unavailable under the following conditions:

If you uninstall and reinstall the SQL IaaS Agent extension.


If you uninstall and reinstall your instance of SQL Server.
If you used custom naming conventions for the disk/storage pool name when
deploying your SQL Server image from the Azure Marketplace.

Disk configuration grayed out during deployment

If you create your SQL Server VM by using an unmanaged disk, disk configuration is
grayed out by design.

Automated backup disabled


If your SQL VM resource displays Automated backup is currently disabled, check to see
if your SQL Server instance has managed backups enabled. To use Automated backups
from the Azure portal, disable managed backups in SQL Server.

Extension stuck in transition


Your SQL IaaS Agent extension may get stuck in a transitioning state in the following
scenarios:

You've removed the NT Service\SQLIaaSExtension account from the SQL Server
logins and/or the local administrators group.
Either of these two services is stopped in services.msc:
Microsoft SQL Server IaaS Agent
Microsoft SQL Server IaaS Query Service

Fails to install on domain controller


Registering a SQL Server instance that's installed on your domain controller with the
SQL IaaS Agent extension isn't supported. Registering with the extension creates the
user NT Service\SQLIaaSExtension, and because this user can't be created on a domain
controller, registering the VM with the extension fails.

Next steps
Review the benefits provided by the SQL IaaS Agent extension.
Manually register a single VM
Automatically register all VMs in a subscription.
Review the SQL IaaS Agent extension privacy statements.
Review the best practices checklist to optimize for performance and security.

To learn more, review the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on Azure VMs
What's new for SQL Server on Azure VMs
Move a SQL Server VM to another
region within Azure with Azure Site
Recovery
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

This article teaches you how to use Azure Site Recovery to migrate your SQL Server
virtual machine (VM) from one region to another within Azure.

Moving a SQL Server VM to a different region requires doing the following:

1. Preparing: Confirm that both your source SQL Server VM and target region are
adequately prepared for the move.
2. Configuring: Moving your SQL Server VM requires that it is a replicated object
within the Azure Site Recovery vault. You need to add your SQL Server VM to the
Azure Site Recovery vault.
3. Testing: Migrating the SQL Server VM requires failing it over from the source
region to the replicated target region. To ensure that the move process will
succeed, you need to first test that your SQL Server VM can successfully fail over to
the target region. This will help expose any issues and avoid them when
performing the actual move.
4. Moving: Once your test failover has passed and you know that you're safe to migrate
your SQL Server VM, you can move the VM to the target region.
5. Cleaning up: To avoid billing charges, remove the SQL Server VM from the vault,
and any unnecessary resources that are left over in the resource group.

Verify prerequisites
Confirm that moving from your source region to your target region is supported.
Review the scenario architecture and components as well as the support limitations
and requirements.
Verify account permissions. If you created your free Azure account, you're the
administrator of your subscription. If you're not the subscription administrator,
work with the administrator to assign the permissions that you need. To enable
replication for a VM and copy data using Azure Site Recovery, you must have:
Permissions to create a VM. The Virtual Machine Contributor built-in role has
these permissions, which include:
Permissions to create a VM in the selected resource group.
Permissions to create a VM in the selected virtual network.
Permissions to write to the selected storage account.
Permissions to manage Azure Site Recovery operations. The Site Recovery
Contributor role has all the permissions that are required to manage Site
Recovery operations in a Recovery Services vault.
Moving the SQL virtual machines resource is not supported. You need to reinstall
the SQL IaaS Agent extension on the target region where you have planned your
move. If you are moving your resources between subscriptions or tenants, make
sure you've registered your subscription with the resource provider before
attempting to register your migrated SQL Server VM with the SQL IaaS Agent
extension.
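
For example, a minimal sketch that registers the resource provider with the target subscription before you register the migrated VM (assumes the Az PowerShell module and that you're signed in to the target subscription):

PowerShell

# Register the SQL Server VM resource provider with the current subscription.
Register-AzResourceProvider -ProviderNamespace Microsoft.SqlVirtualMachine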

Prepare to move
Prepare both the source SQL Server VM and the target region for the move.

Prepare the source SQL Server VM


Ensure that all the latest root certificates are on the SQL Server VM that you want
to move. If the latest root certificates are not there, security constraints will prevent
data copy to the target region.
For Windows VMs, install all of the latest Windows updates on the VM, so that all
the trusted root certificates are on the machine. In a disconnected environment,
follow the standard Windows Update and certificate update process for your
organization.
For Linux VMs, follow the guidance provided by your Linux distributor to get the
latest trusted root certificates and certificate revocation list on the VM.
Make sure you're not using an authentication proxy to control network
connectivity for the VMs that you want to move.
If the VM that you're trying to move doesn't have access to the internet, or it's
using a firewall proxy to control outbound access, check the requirements.
Identify the source networking layout and all the resources that you're currently
using. This includes but isn't limited to load balancers, network security groups
(NSGs), and public IPs.

Prepare the target region


Verify that your Azure subscription allows you to create VMs in the target region
that's used for disaster recovery. Contact support to enable the required quota.
Make sure that your subscription has enough resources to support VMs with sizes
that match your source VMs. If you're using Site Recovery to copy data to the
target, Site Recovery chooses the same size, or the closest possible size for the
target VM.
Make sure that you create a target resource for every component that's identified
in the source networking layout. This step is important to ensure that your VMs
have all the functionality and features in the target region that you had in the
source region.
Azure Site Recovery automatically discovers and creates a virtual network when
you enable replication for the source VM. You can also pre-create a network and
assign it to the VM in the user flow for enabling replication. You need to
manually create any other resources in the target region.
To create the most commonly used network resources that are relevant for you
based on the source VM configuration, see the following documentation:
Network security groups
Load balancer
Public IP address
For any additional networking components, see the networking documentation.
Manually create a non-production network in the target region if you want to test
the configuration before you perform the final move to the target region. We
recommend this step because it ensures minimal interference with the production
network.

Configure Azure Site Recovery vault


The following steps show you how to use Azure Site Recovery to copy data to the target
region. Create the Recovery Services vault in any region other than the source region.

1. Sign in to the Azure portal .

2. Choose to Create a resource from the upper-left hand corner of the navigation
pane.

3. Select IT & Management tools and then select Backup and Site Recovery.

4. On the Basics tab, under Project details, either create a new resource group in the
target region, or select an existing resource group in the target region.

5. Under Instance Details, specify a name for your vault, and then select your target
Region from the drop-down.

6. Select Review + Create to create your Recovery Services vault. To script this step
instead, see the PowerShell sketch after this procedure.


7. Select All services from the upper-left hand corner of the navigation pane and in
the search box type recovery services .

8. (Optionally) Select the star next to Recovery Services vaults to add it to your quick
navigation bar.

9. Select Recovery services vaults and then select the Recovery Services vault you
created.

10. On the Overview pane, select Replicate.

11. Select Source and then select Azure as the source. Select the appropriate values
for the other drop-down fields, such as the location for your source VMs. Only
resource groups located in the Source location region will be visible in the Source
resource group field.

12. Select Virtual machines and then choose the virtual machines you want to
migrate. Select OK to save your VM selection.

13. Select Settings, and then choose your Target location from the drop-down. This
should be the resource group you prepared earlier.

14. Once you have customized replication, select Create target resources to create the
resources in the new location.

15. Once resource creation is complete, select Enable replication to start replication of
your SQL Server VM from the source to the target region.

16. You can check the status of replication by navigating to your recovery vault,
selecting Replicated items and viewing the Status of your SQL Server VM. A status
of Protected indicates that replication has completed.
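
If you prefer to script the vault creation from step 6, the following is a minimal sketch that uses the Az.RecoveryServices PowerShell module; the resource group name, vault name, and region are placeholders:

PowerShell

# Create a resource group and Recovery Services vault in the target region.
# All names and the region below are placeholder values.
New-AzResourceGroup -Name 'TargetRG' -Location 'westus2'
New-AzRecoveryServicesVault -Name 'SqlVmMoveVault' -ResourceGroupName 'TargetRG' -Location 'westus2'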
Test move process
The following steps show you how to use Azure Site Recovery to test the move process.

1. Navigate to your Recovery Services vault in the Azure portal and select
Replicated items.

2. Select the SQL Server VM you would like to move, verify that the Replication
Health shows as Healthy and then select Test Failover.
3. On the Test Failover page, select the Latest app-consistent recovery point to use
for the failover, as that is the only type of snapshot that can guarantee SQL Server
data consistency.

4. Select the virtual network under Azure virtual network and then select OK to test
failover.

) Important

We recommend that you use a separate Azure VM network for the failover
test. Don't use the production network that was set up when you enabled
replication and that you want to move your VMs into eventually.

5. To monitor progress, navigate to your vault, select Site Recovery jobs under
Monitoring, and then select the Test failover job that's in progress.

6. Once the test completes, navigate to Virtual machines in the portal and review the
newly created virtual machine. Make sure the SQL Server VM is running, is sized
appropriately, and is connected to the appropriate network.

7. Delete the VM that was created as part of the test, as the Failover option will be
grayed out until the failover test resources are cleaned up. Navigate back to the
vault, select Replicated items, select the SQL Server VM, and then select Cleanup
test failover. Record and save any observations associated with the test in the
Notes section and select the checkbox next to Testing is complete. Delete test
failover virtual machines. Select OK to clean up resources after the test.
Move the SQL Server VM
The following steps show you how to move the SQL Server VM from your source region
to your target region.

1. Navigate to the Recovery Services vault, select Replicated items, select the VM,
and then select Failover.

2. Select the latest app-consistent recover point under Recovery Point.

3. Select the check box next to Shut down the machine before beginning failover.
Site Recovery will attempt to shut down the source VM before triggering the
failover. Failover continues even if the shutdown fails.

4. Select OK to start the failover.

5. You can monitor the failover process from the same Site Recovery jobs page you
viewed when monitoring the failover test in the previous section.
6. After the job completes, check that the SQL Server VM appears in the target region
as expected.

7. Navigate back to the vault, select Replicated Items, select the SQL Server VM, and
select Commit to finish the move process to the target region. Wait until the
commit job finishes.

8. Register your SQL Server VM with the SQL IaaS Agent extension to enable SQL
virtual machine manageability in the Azure portal and features associated with the
extension, as shown in the sketch after the warning below. For more information, see
Register SQL Server VM with the SQL IaaS Agent extension.

2 Warning

SQL Server data consistency is only guaranteed with app-consistent snapshots. The
latest processed snapshot can't be used for SQL Server failover as a crash recovery
snapshot can't guarantee SQL Server data consistency.
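
As a sketch of the registration in step 8, you can register the moved VM with the extension by using the Az.SqlVirtualMachine PowerShell module; the VM name, resource group, region, and license type are placeholders:

PowerShell

# Register the moved VM with the SQL IaaS Agent extension (placeholder values).
New-AzSqlVM -Name 'MySqlVm' -ResourceGroupName 'TargetRG' -Location 'westus2' -LicenseType 'PAYG'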

Clean up source resources


To avoid billing charges, remove the SQL Server VM from the vault, and delete any
unnecessary associated resources.

1. Navigate back to the Site Recovery vault, select Replicated items, and select the
SQL Server VM.

2. Select Disable Replication. Select a reason for disabling protection, and then select
OK to disable replication.

) Important

It is important to perform this step to avoid being charged for Azure Site
Recovery replication.

3. If you have no plans to reuse any of the resources in the source region, delete all
relevant network resources, and corresponding storage accounts.

Next steps
For more information, see the following articles:
Overview of SQL Server on a Windows VM
SQL Server on a Windows VM FAQ
SQL Server on a Windows VM pricing guidance
What's new for SQL Server on Azure VMs
Configure cluster quorum for SQL
Server on Azure VMs
Article • 11/09/2022

Applies to:
SQL Server on Azure VM

This article teaches you to configure one of the three quorum options for a Windows
Server failover cluster used by SQL Server on Azure Virtual Machines (VMs): a disk
witness, a cloud witness, and a file share witness.

Overview
The quorum for a cluster is determined by the number of voting elements that must be
part of active cluster membership for the cluster to start properly or continue running.
Configuring a quorum resource allows a two-node cluster to continue with only one
node online. The Windows Server Failover Cluster is the underlying technology for the
SQL Server on Azure VMs high availability options: failover cluster instances (FCIs) and
availability groups (AGs).

The disk witness is the most resilient quorum option, but to use a disk witness on a SQL
Server on Azure VM, you must use an Azure shared disk, which imposes some limitations
on the high availability solution. As such, use a disk witness when you're configuring
your failover cluster instance with Azure shared disks; otherwise, use a cloud witness
whenever possible. If you're using Windows Server 2012 R2 or older, which doesn't
support cloud witness, you can use a file share witness.

The following quorum options are available to use for SQL Server on Azure VMs:

                 Cloud witness           Disk witness   File share witness
Supported OS     Windows Server 2016+    All            All

To learn more about quorum, see the Windows Server Failover Cluster overview.

Cloud witness
A cloud witness is a type of failover cluster quorum witness that uses Microsoft Azure
storage to provide a vote on cluster quorum.
The following list provides additional information and considerations about the cloud
witness:

Description:
Uses Azure Storage as the cloud witness; contains just the time stamp.
Ideal for deployments in multiple sites, multiple zones, and multiple regions.
Creates a well-known container, msft-cloud-witness, under the Microsoft storage account.
Writes a single blob file under the container, with the corresponding cluster's unique ID
used as the file name of the blob file.

Requirements and recommendations:
Default size is 1 MB.
Use General Purpose for the account kind. Blob storage is not supported.
Use Standard storage. Azure Premium Storage is not supported.
Failover Clustering uses the blob file as the arbitration point, which requires some
consistency guarantees when reading the data. Therefore, you must select Locally
redundant storage for the Replication type.
Cloud witness uses HTTPS (default port 443) to establish communication with Azure
Blob Storage. Ensure that the HTTPS port is accessible via network proxy.

When configuring a Cloud Witness quorum resource for your Failover Cluster, consider:

Instead of storing the access key, your failover cluster generates and securely
stores a shared access signature (SAS) token.
The generated SAS token is valid as long as the Access Key remains valid. When
rotating the Primary Access Key, it is important to first update the Cloud Witness
(on all your clusters that are using that Storage Account) with the Secondary
Access Key before regenerating the Primary Access Key.
Cloud Witness uses the HTTPS REST interface of the Azure Storage account service.
This means it requires the HTTPS port to be open on all cluster nodes.

A cloud witness requires an Azure Storage Account. To configure a storage account,


follow these steps:

1. Sign in to the Azure portal .


2. On the Hub menu, select New -> Data + Storage -> Storage account.
3. In the Create a storage account page, do the following:
a. Enter a name for your storage account. Storage account names must be
between 3 and 24 characters in length and may contain numbers and lowercase
letters only. The storage account name must also be unique within Azure.
b. For Account kind, select General purpose.
c. For Performance, select Standard.
d. For Replication, select Locally-redundant storage (LRS).

Once your storage account is created, follow these steps to configure your cloud witness
quorum resource for your failover cluster:

The existing Set-ClusterQuorum PowerShell command has new parameters
corresponding to the cloud witness. Configure the cloud witness with the
Set-ClusterQuorum cmdlet:

PowerShell

Set-ClusterQuorum -CloudWitness -AccountName <StorageAccountName> -AccessKey <StorageAccountAccessKey>

In the rare instance you need to use a different endpoint, use this PowerShell
command:

PowerShell

Set-ClusterQuorum -CloudWitness -AccountName <StorageAccountName> -AccessKey <StorageAccountAccessKey> -Endpoint <servername>

See the cloud witness documentation for help for finding the Storage Account
AccessKey.

Disk witness
A disk witness is a small clustered disk in the Cluster Available Storage group. This disk is
highly available and can fail over between nodes.

The disk witness is the recommended quorum option when used with a shared storage
high availability solution, such as the failover cluster instance with Azure shared disks.

The following list provides additional information and considerations about the
quorum disk witness:

Description:
Dedicated LUN that stores a copy of the cluster database.
Most useful for clusters with shared (not replicated) storage.

Requirements and recommendations:
Size of the LUN must be at least 512 MB.
Must be dedicated to cluster use and not assigned to a clustered role.
Must be included in clustered storage and pass the storage validation tests.
Can't be a disk that is a Cluster Shared Volume (CSV).
Must be a basic disk with a single volume.
Doesn't need to have a drive letter.
Can be formatted with NTFS or ReFS.
Can optionally be configured with hardware RAID for fault tolerance.
Should be excluded from backups and antivirus scanning.
A disk witness isn't supported with Storage Spaces Direct.

To use an Azure shared disk for the disk witness, you must first create the disk and
mount it. To do so, follow the steps in the Mount disk section of the Azure shared disk
failover cluster instance guide. The disk does not need to be premium.

After your disk has been mounted, add it to the cluster storage with the following steps:

1. Open Failover Cluster Manager.


2. Select Disks under Storage on the left navigation pane.
3. Select Add Disk under Actions on the right navigation pane.
4. Select the Azure shared drive you just mounted and note the name, such as
Cluster Disk 3 .

After your disk has been added as clustered storage, configure it as the disk witness
using PowerShell:

Use the name of the clustered disk as the parameter for the disk witness with the
Set-ClusterQuorum cmdlet:

PowerShell

Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 3"

You can also use the Failover Cluster manager; follow the same steps as for the cloud
witness, but choose the disk witness as the quorum option instead.
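
If you're unsure of the clustered disk name to pass to Set-ClusterQuorum, you can list the physical disk resources first. A minimal sketch using the FailoverClusters PowerShell module:

PowerShell

# List clustered physical disks to find the witness disk name (for example, 'Cluster Disk 3').
Get-ClusterResource | Where-Object { $_.ResourceType -eq 'Physical Disk' } |
    Select-Object Name, State, OwnerGroup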

File share witness


A file share witness is an SMB file share that's typically configured on a file server
running Windows Server. It maintains clustering information in a witness.log file, but
doesn't store a copy of the cluster database. In Azure, you can configure a file share on
a separate virtual machine.

Configure a file share witness if a disk witness or a cloud witness are unavailable or
unsupported in your environment.

The following list provides additional information and considerations about the
quorum file share witness:

Description:
SMB file share that is configured on a file server running Windows Server.
Does not store a copy of the cluster database.
Maintains cluster information only in a witness.log file.
Most useful for multisite clusters with replicated storage.

Requirements and recommendations:
Must have a minimum of 5 MB of free space.
Must be dedicated to the single cluster and not used to store user or application data.
Must have write permissions enabled for the computer object for the cluster name.

The following are additional considerations for a file server that hosts the file share
witness:

A single file server can be configured with file share witnesses for multiple clusters.
The file server must be on a site that is separate from the cluster workload. This
allows equal opportunity for any cluster site to survive if site-to-site network
communication is lost. If the file server is on the same site, that site becomes the
primary site, and it is the only site that can reach the file share.
The file server can run on a virtual machine if the virtual machine is not hosted on
the same cluster that uses the file share witness.
For high availability, the file server can be configured on a separate failover cluster.
Once you have created your file share and properly configured permissions, mount the
file share to your clustered nodes. You can follow the same general steps to mount the
file share as described in the mount file share section of the premium file share failover
cluster instance how-to guide.

After your file share has been properly configured and mounted, use PowerShell to add
the file share as the quorum witness resource:

PowerShell

Set-ClusterQuorum -FileShareWitness <UNC path to file share> -Credential $(Get-Credential)

You'll be prompted for the account and password of a non-admin account that is local
to the file share and has full control of the share. The cluster keeps the name and
password encrypted, not accessible by anyone.

You can also use the Failover Cluster manager; follow the same steps as for the cloud
witness, but choose the file share witness as the quorum option instead.

Change quorum voting


It's possible to change the quorum vote of a node participating in a Windows Server
Failover Cluster.

When modifying the node vote settings, follow these guidelines:

Quorum voting guidelines

Start with each node having no vote by default. Each node should only have a vote with explicit
justification.

Enable votes for cluster nodes that host the primary replica of an availability group, or the
preferred owners of a failover cluster instance.

Enable votes for automatic failover owners. Each node that might host a primary replica or FCI as
a result of an automatic failover should have a vote.

If an availability group has more than one secondary replica, only enable votes for the replicas
that have automatic failover.

Disable votes for nodes that are in secondary disaster recovery sites. Nodes in secondary sites
shouldn't contribute to the decision of taking a cluster offline if there's nothing wrong with the
primary site.

Have an odd number of votes, with a minimum of three quorum votes. Add a quorum witness for an
additional vote if necessary in a two-node cluster.

Reassess vote assignments after failover. You don't want to fail over into a cluster configuration
that doesn't support a healthy quorum.
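
As a sketch of these guidelines, you can review and change a node's vote by setting its NodeWeight property with the FailoverClusters PowerShell module; the node name is a placeholder:

PowerShell

# Review current vote assignments for all cluster nodes.
Get-ClusterNode | Select-Object Name, NodeWeight, State

# Remove the vote from a node in a secondary disaster recovery site (placeholder name).
(Get-ClusterNode -Name 'DRNode1').NodeWeight = 0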

Next Steps
To learn more, see:

HADR settings for SQL Server on Azure VMs


Windows Server Failover Cluster with SQL Server on Azure VMs
Always On availability groups with SQL Server on Azure VMs
Failover cluster instances with SQL Server on Azure VMs
Failover cluster instance overview
Automated Backup for Azure virtual
machines (Resource Manager)
Article • 06/27/2023

Applies to:
SQL Server on Azure VM

Automated Backup automatically configures Managed Backup to Microsoft Azure for all
existing and new databases on an Azure VM running SQL Server 2016 or later Standard,
Enterprise, or Developer editions. This enables you to configure regular database
backups that utilize durable Azure Blob Storage. Automated Backup depends on the
SQL Server infrastructure as a service (IaaS) Agent Extension.

Prerequisites
To use Automated Backup, review the following prerequisites:

Operating system:

Windows Server 2012 R2 or later

SQL Server version/edition:

SQL Server 2016 or later: Developer, Standard, or Enterprise

7 Note

For SQL Server 2014, see Automated Backup for SQL Server 2014.

Database configuration:

Target user databases must use the full recovery model; a sketch for checking and
setting the recovery model follows this list. System databases don't have to use the
full recovery model. However, if you require log backups to be taken for model or
msdb , you must use the full recovery model. For more information about the impact
of the full recovery model on backups, see Backup under the full recovery model.
The SQL Server VM has been registered with the SQL IaaS Agent extension and the
Automated Backup feature is enabled. Since Automated Backup relies on the
extension, Automated Backup is only supported on target databases from the
default instance, or a single named instance. If there's no default instance, and
multiple named instances, the SQL IaaS Agent extension fails and Automated
Backup won't work.
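
As a sketch of the recovery model requirement above, you can check and set a database's recovery model with the Invoke-Sqlcmd cmdlet from the SqlServer PowerShell module (assumed installed); the database name is a placeholder:

PowerShell

# Check the recovery model of every database on the default instance.
Invoke-Sqlcmd -ServerInstance 'localhost' -Query 'SELECT name, recovery_model_desc FROM sys.databases;'

# Switch a user database to the full recovery model (placeholder database name).
Invoke-Sqlcmd -ServerInstance 'localhost' -Query 'ALTER DATABASE [MyDatabase] SET RECOVERY FULL;'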

Settings
The following settings can be configured for Automated Backup. The actual
configuration steps vary depending on whether you use the Azure portal or Azure
Windows PowerShell commands. Automated Backup uses backup compression by
default, and it can't be disabled.

Basic Settings

Automated Backup (Enable/Disable; default Disabled): Enables or disables Automated
Backup for an Azure VM running SQL Server 2016 or later Developer, Standard, or
Enterprise.

Retention Period (1-90 days; default 90 days): The number of days to retain backups.

Storage Account (Azure storage account): An Azure storage account to use for storing
Automated Backup files in blob storage. A container is created at this location to store
all backup files. The backup file naming convention includes the date, time, and
database GUID.

Encryption (Enable/Disable; default Disabled): Enables or disables backup encryption.
When backup encryption is enabled, the certificates used to restore the backup are
located in the specified storage account, in the same automaticbackup container, using
the same naming convention. If the password changes, a new certificate is generated
with that password, but the old certificate remains to restore prior backups.

Password (Password text): A password for encryption keys. This password is only
required if encryption is enabled. In order to restore an encrypted backup, you must
have the correct password and the related certificate that was used at the time the
backup was taken.

Advanced Settings

System Database Backups (Enable/Disable; default Disabled): When enabled, this
feature also backs up the system databases: master , msdb , and model . For the msdb
and model databases, verify that they're in full recovery mode if you want log backups
to be taken. Log backups are never taken for master , and no backups are taken for
tempdb .

Backup Schedule (Manual/Automated; default Automated): By default, the backup
schedule is automatically determined based on the log growth. A manual backup
schedule allows the user to specify the time window for backups. In this case, backups
only take place at the specified frequency and during the specified time window of a
given day.

Full backup frequency (Daily/Weekly): Frequency of full backups. In both cases, full
backups begin during the next scheduled time window. When Weekly is selected,
backups could span multiple days until all databases have successfully backed up.

Full backup start time (00:00-23:00; default 01:00): Start time of a given day during
which full backups can take place.

Full backup time window (1-23 hours; default 1 hour): Duration of the time window of
a given day during which full backups can take place.

Log backup frequency (5-60 minutes; default 60 minutes): Frequency of log backups.

Understanding full backup frequency


It's important to understand the difference between daily and weekly full backups.
Consider the following two example scenarios.

Scenario 1: Weekly backups


You have a SQL Server VM that contains a number of large databases.

On Monday, you enable Automated Backup with the following settings:

Backup schedule: Manual


Full backup frequency: Weekly
Full backup start time: 01:00
Full backup time window: 1 hour
This means that the next available backup window is Tuesday at 1 AM for 1 hour. At that
time, Automated Backup begins backing up your databases one at a time. In this
scenario, your databases are large enough that full backups complete for the first couple
databases. However, after one hour not all of the databases have been backed up.

When this happens, Automated Backup begins backing up the remaining databases the
next day, Wednesday at 1 AM for one hour. If not all databases have been backed up in
that time, it tries again the next day at the same time. This continues until all databases
have been successfully backed up.

After it reaches Tuesday again, Automated Backup begins backing up all databases
again.

This scenario shows that Automated Backup only operates within the specified time
window, and each database is backed up once per week. This also shows that it's
possible for backups to span multiple days in the case where it isn't possible to
complete all backups in a single day.

Scenario 2: Daily backups


You have a SQL Server VM that contains a number of large databases.

On Monday, you enable Automated Backup with the following settings:

Backup schedule: Manual


Full backup frequency: Daily
Full backup start time: 22:00
Full backup time window: 6 hours

This means that the next available backup window is Monday at 10 PM for 6 hours. At
that time, Automated Backup begins backing up your databases one at a time.

Then, on Tuesday at 10 PM, full backups of all databases start again within the 6-hour window.

) Important

Backups happen sequentially during each interval. For instances with a large
number of databases, schedule your backup interval with enough time to
accommodate all backups. If backups cannot complete within the given interval,
some backups may be skipped, and your time between backups for a single
database may be higher than the configured backup interval time, which could
negatively impact your restore point objective (RPO).
Configure new VMs
Use the Azure portal to configure Automated Backup when you create a new SQL Server
2016 or later machine in the Resource Manager deployment model.

In the SQL Server settings tab, select Enable under Automated Backup.
When you
enable Automated Backup, you can configure the following settings:

Retention period for backups (up to 90 days)


Storage account, and storage container, to use for backups
Encryption option and password for backups
Backup system databases
Configure backup schedule

To encrypt the backup, select Enable. Then specify the Password. Azure creates a
certificate to encrypt the backups and uses the specified password to protect that
certificate.

Choose Select Storage Container to specify the container where you want to store your
backups.

By default the schedule is set automatically, but you can create your own schedule by
selecting Manual, which allows you to configure the backup frequency, backup time
window, and the log backup frequency in minutes.

The following Azure portal screenshot shows the Automated Backup settings when you
create a new SQL Server VM:
Configure existing VMs
For existing SQL Server virtual machines, go to the SQL virtual machines resource and
then select Backups to configure your Automated Backups.

Select Enable to configure your Automated Backup settings.

You can configure the retention period (up to 90 days), the container for the storage
account where you want to store your backups, as well as the encryption, and the
backup schedule. By default, the schedule is automated.
If you want to set your own backup schedule, choose Manual and configure the backup
frequency, whether or not you want system databases backed up, and the transaction
log backup interval in minutes.

When finished, select the Apply button on the bottom of the Backups settings page to
save your changes.

If you're enabling Automated Backup for the first time, Azure configures the SQL Server
IaaS Agent in the background. During this time, the Azure portal might not show that
Automated Backup is configured. Wait several minutes for the agent to be installed
and configured. After that, the Azure portal will reflect the new settings.

Configure with PowerShell


You can use PowerShell to configure Automated Backup. Before you begin, you must:

Download and install the latest Azure PowerShell .


Open Windows PowerShell and associate it with your account with the Connect-
AzAccount command.

7 Note

This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.

Install the SQL Server IaaS Extension


If you provisioned a SQL Server virtual machine from the Azure portal, the SQL Server
IaaS Extension should already be installed. You can determine whether it's installed for
your VM by calling Get-AzVM command and examining the Extensions property.

PowerShell

$vmname = "vmname"

$resourcegroupname = "resourcegroupname"

(Get-AzVM -Name $vmname -ResourceGroupName $resourcegroupname).Extensions

If the SQL Server IaaS Agent extension is installed, you should see it listed as
"SqlIaaSAgent" or "SQLIaaSExtension." ProvisioningState for the extension should also
show "Succeeded."

If it isn't installed or it has failed to be provisioned, you can install it with the following
command. In addition to the VM name and resource group, you must also specify the
region ($region) that your VM is located in.

PowerShell

$region = "EASTUS2"

Set-AzVMSqlServerExtension -VMName $vmname `

-ResourceGroupName $resourcegroupname -Name "SQLIaasExtension" `

-Version "2.0" -Location $region

Verify current settings


If you enabled Automated Backup during provisioning, you can use PowerShell to check
your current configuration. Run the Get-AzVMSqlServerExtension command and
examine the AutoBackupSettings property:

PowerShell

(Get-AzVMSqlServerExtension -VMName $vmname -ResourceGroupName


$resourcegroupname).AutoBackupSettings

You should get output similar to the following:

Enable : True

EnableEncryption : False

RetentionPeriod : 30

StorageUrl : https://test.blob.core.windows.net/

StorageAccessKey :

Password :

BackupSystemDbs : False

BackupScheduleType : Manual

FullBackupFrequency : WEEKLY

FullBackupStartTime : 2

FullBackupWindowHours : 2

LogBackupFrequency : 60

If your output shows that Enable is set to False, then you have to enable Automated
Backup. The good news is that you enable and configure Automated Backup in the
same way. See the next section for this information.

7 Note

If you check the settings immediately after making a change, it is possible that you
will get back the old configuration values. Wait a few minutes and check the
settings again to make sure that your changes were applied.

Configure Automated Backup


You can use PowerShell to enable Automated Backup as well as to modify its
configuration and behavior at any time.

First, select, or create a storage account for the backup files. The following script selects
a storage account or creates it if it doesn't exist.
PowerShell

$storage_accountname = "yourstorageaccount"

$storage_resourcegroupname = $resourcegroupname

$storage = Get-AzStorageAccount -ResourceGroupName $resourcegroupname `

-Name $storage_accountname -ErrorAction SilentlyContinue

If (-Not $storage)

{ $storage = New-AzStorageAccount -ResourceGroupName


$storage_resourcegroupname `

-Name $storage_accountname -SkuName Standard_GRS -Location $region }

7 Note

Automated Backup does not support storing backups in premium storage, but it
can take backups from VM disks which use Premium Storage.

Then use the New-AzVMSqlServerAutoBackupConfig command to enable and


configure the Automated Backup settings to store backups in the Azure storage
account. In this example, the backups are set to be retained for 10 days. System
database backups are enabled. Full backups are scheduled for weekly with a time
window starting at 20:00 for two hours. Log backups are scheduled for every 30
minutes. The second command, Set-AzVMSqlServerExtension, updates the specified
Azure VM with these settings.

PowerShell

$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable `

-RetentionPeriodInDays 10 -StorageContext $storage.Context `

-ResourceGroupName $storage_resourcegroupname -BackupSystemDbs `

-BackupScheduleType Manual -FullBackupFrequency Weekly `

-FullBackupStartHour 20 -FullBackupWindowInHours 2 `

-LogBackupFrequencyInMinutes 30

Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `

-VMName $vmname -ResourceGroupName $resourcegroupname

It could take several minutes to install and configure the SQL Server IaaS Agent.

To enable encryption, modify the previous script to pass the EnableEncryption


parameter along with a password (secure string) for the CertificatePassword parameter.
The following script enables the Automated Backup settings in the previous example
and adds encryption.

PowerShell
$password = "P@ssw0rd"

$encryptionpassword = $password | ConvertTo-SecureString -AsPlainText -Force

$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable `

-EnableEncryption -CertificatePassword $encryptionpassword `

-RetentionPeriodInDays 10 -StorageContext $storage.Context `

-ResourceGroupName $storage_resourcegroupname -BackupSystemDbs `

-BackupScheduleType Manual -FullBackupFrequency Weekly `

-FullBackupStartHour 20 -FullBackupWindowInHours 2 `

-LogBackupFrequencyInMinutes 30

Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `

-VMName $vmname -ResourceGroupName $resourcegroupname

To confirm your settings are applied, verify the Automated Backup configuration.

Disable Automated Backup


To disable Automated Backup, run the same script without the -Enable parameter to the
New-AzVMSqlServerAutoBackupConfig command. The absence of the -Enable
parameter signals the command to disable the feature. As with installation, it can take
several minutes to disable Automated Backup.

PowerShell

$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -ResourceGroupName


$storage_resourcegroupname

Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `

-VMName $vmname -ResourceGroupName $resourcegroupname

Example script
The following script provides a set of variables that you can customize to enable and
configure Automated Backup for your VM. In your case, you might need to customize
the script based on your requirements. For example, you would have to make changes if
you wanted to disable the backup of system databases or enable encryption.

PowerShell

$vmname = "yourvmname"

$resourcegroupname = "vmresourcegroupname"

$region = "Azure region name such as EASTUS2"

$storage_accountname = "storageaccountname"

$storage_resourcegroupname = $resourcegroupname

$retentionperiod = 10

$backupscheduletype = "Manual"

$fullbackupfrequency = "Weekly"

$fullbackupstarthour = "20"

$fullbackupwindow = "2"
$logbackupfrequency = "30"

# ResourceGroupName is the resource group which is hosting the VM where you


are deploying the SQL Server IaaS Extension

Set-AzVMSqlServerExtension -VMName $vmname `

-ResourceGroupName $resourcegroupname -Name "SQLIaasExtension" `

-Version "2.0" -Location $region

# Creates/use a storage account to store the backups

$storage = Get-AzStorageAccount -ResourceGroupName $resourcegroupname `

-Name $storage_accountname -ErrorAction SilentlyContinue

If (-Not $storage)

{ $storage = New-AzStorageAccount -ResourceGroupName


$storage_resourcegroupname `

-Name $storage_accountname -SkuName Standard_GRS -Location $region }

# Configure Automated Backup settings

$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable `

-RetentionPeriodInDays $retentionperiod -StorageContext $storage.Context


`

-ResourceGroupName $storage_resourcegroupname -BackupSystemDbs `

-BackupScheduleType $backupscheduletype -FullBackupFrequency


$fullbackupfrequency `

-FullBackupStartHour $fullbackupstarthour -FullBackupWindowInHours


$fullbackupwindow `

-LogBackupFrequencyInMinutes $logbackupfrequency

# Apply the Automated Backup settings to the VM

Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `

-VMName $vmname -ResourceGroupName $resourcegroupname

Monitoring
To monitor Automated Backup on SQL Server 2016 and later, you have two main
options. Because Automated Backup uses the SQL Server Managed Backup feature, the
same monitoring techniques apply to both.

First, you can poll the status by calling
msdb.managed_backup.sp_get_backup_diagnostics, or by querying the
msdb.managed_backup.fn_get_health_status table-valued function.
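
For example, a minimal sketch that polls both with the Invoke-Sqlcmd cmdlet from the SqlServer PowerShell module (assumed installed):

PowerShell

# Retrieve Managed Backup diagnostics and the health status for the last 24 hours.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'msdb' -Query @'
EXEC managed_backup.sp_get_backup_diagnostics;
SELECT * FROM managed_backup.fn_get_health_status(DATEADD(HOUR, -24, GETUTCDATE()), GETUTCDATE());
'@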

Another option is to take advantage of the built-in Database Mail feature for
notifications.
1. Call the msdb.managed_backup.sp_set_parameter stored procedure to assign an
email address to the SSMBackup2WANotificationEmailIds parameter (see the sketch
after this list).
2. Enable SendGrid to send the emails from the Azure VM.
3. Use the SMTP server and user name to configure Database Mail. You can configure
Database Mail in SQL Server Management Studio or with Transact-SQL commands.
For more information, see Database Mail.
4. Configure SQL Server Agent to use Database Mail.
5. Verify that the SMTP port is allowed both through the local VM firewall and the
network security group for the VM.
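
As a sketch of step 1, you can assign the notification address with Invoke-Sqlcmd from the SqlServer PowerShell module (assumed installed); the email address is a placeholder:

PowerShell

# Set the Managed Backup notification email address (placeholder value).
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'msdb' -Query @'
EXEC managed_backup.sp_set_parameter
    @parameter_name = N'SSMBackup2WANotificationEmailIds',
    @parameter_value = N'dba@contoso.com';
'@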

Known issues
Consider these known issues when working with the Automated Backup feature.

Can't enable Automated Backup in the Azure portal


The following list describes possible solutions if you're having issues enabling
Automated Backup from the Azure portal:

Symptom: Enabling Automated Backup fails if your IaaS extension is in a failed state.
Solution: Repair the SQL IaaS Agent extension if it's in a failed state.

Symptom: Enabling Automated Backup fails if you have hundreds of databases.
Solution: This is a known limitation with the SQL IaaS Agent extension. To work
around this issue, you can enable Managed Backup directly instead of using the SQL
IaaS Agent extension to configure Automated Backup.

Symptom: Enabling Automated Backup fails due to metadata issues.
Solution: Stop the SQL IaaS Agent service. Run the T-SQL command: use msdb exec
autoadmin_metadata_delete . Start the SQL IaaS Agent service and try to re-enable
Automated Backup from the Azure portal.

Symptom: Enabling Automated Backup fails for an FCI.
Solution: Backups using private endpoints are unsupported. Use the full storage
account URI for your backup.

Symptom: Backing up multiple SQL Server instances using Automated Backup.
Solution: Automated Backup currently only supports one SQL Server instance. If you
have multiple named instances and the default instance, Automated Backup works with
the default instance. If you have multiple named instances and no default instance,
turning on Automated Backup fails.

Symptom: Automated Backup can't be enabled due to account and permissions.
Solution: Check the following:
- The SQL Server Agent is running.
- The NT Service\SqlIaaSExtensionQuery account has proper permissions for the
Automated Backup feature, both within SQL Server and for the SQL virtual machines
resource in the Azure portal.
- The SA account hasn't been renamed, though disabling it is acceptable.

Symptom: Automated Backup fails for SQL Server 2016 and later.
Solution: Enable Allow Blob Public Access on the storage account. This provides a
temporary workaround to a known issue.
Common errors with Automated or Managed Backup


The following list describes possible errors and solutions when working with Automated
Backups:

Symptom: Automated/Managed Backup fails due to connectivity to the storage account
or timeout errors.
Solution: Check that the network security group (NSG) for the virtual network and the
Windows Firewall aren't blocking outbound connections from the virtual machine (VM)
to the storage account on port 443.

Symptom: Automated/Managed Backup fails due to memory or IO pressure.
Solution: See if you can increase the max server memory, and/or resize the disk or VM
if you're running out of IO or VM limits. If you're using an availability group, consider
offloading your backups to the secondary replica.

Symptom: Automated Backup fails after a server rename.
Solution: If you've renamed your machine hostname, you need to also rename the
hostname inside SQL Server.

Symptom: Error: The operation failed because of an internal error. The argument must
not be empty string.\r\nParameter name: sas Token Please retry later.
Solution: This is likely caused by the SQL Server Agent service not having correct
impersonation permissions. Change the SQL Server Agent service to use a different
account to fix this issue.

Symptom: Error: SQL Server Managed Backup to Microsoft Azure cannot configure the
default backup settings for the SQL Server instance because the container URL was
invalid. It is also possible that your SAS credential is invalid.
Solution: You may see this error if you have a large number of databases. Use Managed
Backup instead of Automated Backup.

Symptom: Automated Backup job fails after a VM restart.
Solution: Check that the SQL Server Agent service is up and running.

Symptom: Managed Backup fails intermittently / Error: Execution Timeout Expired.
Solution: This is a known issue fixed in CU18 for SQL Server 2019 and KB4040376 for
SQL Server 2014-2017.

Symptom: Error: The remote server returned an error: (403) Forbidden.
Solution: Repair the SQL IaaS Agent extension.

Symptom: Error 3202: Write on storage account failed 13 (The data is invalid).
Solution: Remove the immutable blob policy on the storage container and make sure
the storage account is using, at minimum, TLS 1.0.

Disabling Automated Backup or Managed Backup fails


The following list describes possible solutions if you're having issues disabling
Automated Backup from the Azure portal:

Symptom: Disabling Automated Backup fails if your IaaS extension is in a failed state.
Solution: Repair the SQL IaaS Agent extension if it's in a failed state.

Symptom: Disabling Automated Backup fails due to metadata issues.
Solution: Stop the SQL IaaS Agent service. Run the T-SQL command: use msdb exec
autoadmin_metadata_delete . Start the SQL IaaS Agent service and try to disable
Automated Backup from the Azure portal.

Symptom: Automated Backup can't be disabled due to account and permissions.
Solution: Check the following:
- The SQL Server Agent is running.
- The NT Service\SqlIaaSExtensionQuery account has proper permissions for the
Automated Backup feature, both within SQL Server and for the SQL virtual machines
resource in the Azure portal.
- The SA account hasn't been renamed, though disabling it is acceptable.
I want to find out what service/application is taking SQL
Server backups
In SQL Server Management Studio (SSMS) Object Explorer, right-click the database
> Select Reports > Standard Reports > Backup and Restore Events. In the report,
you can expand the Successful Backup Operations section to see the backup
history.
If you see multiple backups on Azure or to a virtual device, check if you're using
Azure Backup to back up individual SQL databases or taking a virtual machine
snapshot to a virtual device, which uses the NT Authority/SYSTEM account. If you're
not, check the Windows Services console (services.msc) to identify any third-party
applications which may be taking backups.
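
As a sketch, the same backup history behind the SSMS report can be queried directly from msdb; the user_name and device columns help identify which service took each backup. This assumes Invoke-Sqlcmd from the SqlServer PowerShell module:

PowerShell

# List the 20 most recent backups, where they were written, and who took them.
# device_type 7 indicates a virtual device; 9 indicates an Azure Blob Storage URL.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'msdb' -Query @'
SELECT TOP (20) bs.database_name, bs.backup_finish_date, bs.type,
       bs.user_name, bmf.device_type, bmf.physical_device_name
FROM dbo.backupset AS bs
JOIN dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;
'@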

Next steps
Automated Backup configures Managed Backup on Azure VMs. So it's important to
review the documentation for Managed Backup to understand the behavior and
implications.

You can find additional backup and restore guidance for SQL Server on Azure VMs in
the following article: Backup and restore for SQL Server on Azure virtual machines.

For information about other available automation tasks, see SQL Server IaaS Agent
Extension.

For more information about running SQL Server on Azure VMs, see SQL Server on Azure
virtual machines overview.
Automated Backup for SQL Server 2014
virtual machines (Resource Manager)
Article • 06/27/2023

Applies to:
SQL Server on Azure VM

Automated Backup automatically configures Managed Backup to Microsoft Azure for all
existing and new databases on an Azure VM running SQL Server 2014 Standard or
Enterprise. This enables you to configure regular database backups that utilize durable
Azure Blob storage. Automated Backup depends on the SQL Server infrastructure as a
service (IaaS) Agent Extension.

7 Note

Azure has two different deployment models you can use to create and work with
resources: Azure Resource Manager and classic. This article covers the use of the
Resource Manager deployment model. We recommend the Resource Manager
deployment model for new deployments instead of the classic deployment model.

Prerequisites
To use Automated Backup, consider the following prerequisites:

Operating system:

Windows Server 2012 and greater

SQL Server version/edition:

SQL Server 2014 Standard


SQL Server 2014 Enterprise

7 Note

For SQL 2016 and greater, see Automated Backup for SQL Server 2016.

Database configuration:

Target user databases must use the full recovery model. System databases do not
have to use the full recovery model. However, if you require log backups to be
taken for model or msdb , you must use the full recovery model. For more
information about the impact of the full recovery model on backups, see Backup
under the full recovery model.
The SQL Server VM has been registered with the SQL IaaS Agent extension and the
automated backup feature is enabled. Since automated backup relies on the
extension, automated backup is only supported on target databases from the
default instance, or a single named instance. If there is no default instance, and
multiple named instances, the SQL IaaS Agent extension fails and automated
backup won't work.

Settings
The following settings can be configured for Automated Backup. The actual
configuration steps vary depending on whether you use the Azure portal or Azure
Windows PowerShell commands. Automated Backup uses backup compression by
default, and you can't disable it.

Automated Backup (Enable/Disable; default Disabled): Enables or disables Automated
Backup for an Azure VM running SQL Server 2014 Standard or Enterprise.

Retention Period (1-90 days; default 90 days): The number of days to retain a backup.

Storage Account (Azure storage account): An Azure storage account to use for storing
Automated Backup files in blob storage. A container is created at this location to store
all backup files. The backup file naming convention includes the date, time, and
machine name.

Encryption (Enable/Disable; default Disabled): Enables or disables backup encryption.
When backup encryption is enabled, the certificates used to restore the backup are
located in the specified storage account, in the same automaticbackup container, using
the same naming convention. If the password changes, a new certificate is generated
with that password, but the old certificate remains to restore prior backups.

Password (Password text): A password for encryption keys. This password is only
required if encryption is enabled. In order to restore an encrypted backup, you must
have the correct password and the related certificate that was used at the time the
backup was taken.

Configure new VMs


Use the Azure portal to configure Automated Backup when you create a new SQL Server
2014 virtual machine in the Resource Manager deployment model.

On the SQL Server settings tab, scroll down to Automated backup and select Enable.
The following Azure portal screenshot shows the SQL Automated Backup settings.

Configure existing VMs


For existing SQL Server VMs, you can enable and disable automated backups, change
the retention period, specify the storage account, and enable encryption from the Azure
portal.

Navigate to the SQL virtual machines resource for your SQL Server 2014 virtual machine
and then select Backups.
When finished, select the Apply button on the bottom of the Backups page to save your
changes.

If you are enabling Automated Backup for the first time, Azure configures the SQL
Server IaaS Agent in the background. During this time, the Azure portal might not show
that Automated Backup is configured. Wait several minutes for the agent to be installed
and configured. After that, the Azure portal will reflect the new settings.

7 Note

You can also configure Automated Backup using a template. For more information,
see Azure quickstart template for Automated Backup .

Configure with PowerShell


You can use PowerShell to configure Automated Backup. Before you begin, you must:

Download and install the latest Azure PowerShell .


Open Windows PowerShell and associate it with your account with the Connect-
AzAccount command.

7 Note

This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.

Verify current settings


If you enabled automated backup during provisioning, you can use PowerShell to check
your current configuration. Run the Get-AzVMSqlServerExtension command and
examine the AutoBackupSettings property:

PowerShell

(Get-AzVMSqlServerExtension -VMName $vmname -ResourceGroupName


$resourcegroupname).AutoBackupSettings

You should get output similar to the following:

Enable : False

EnableEncryption : False

RetentionPeriod : -1

StorageUrl : NOTSET

StorageAccessKey :

Password :

BackupSystemDbs : False

BackupScheduleType :

FullBackupFrequency :

FullBackupStartTime :

FullBackupWindowHours :

LogBackupFrequency :

If your output shows that Enable is set to False, then you have to enable automated
backup. The good news is that you enable and configure Automated Backup in the
same way. See the next section for this information.

7 Note

If you check the settings immediately after making a change, it is possible that you
will get back the old configuration values. Wait a few minutes and check the
settings again to make sure that your changes were applied.

Configure Automated Backup


You can use PowerShell to enable Automated Backup as well as to modify its
configuration and behavior at any time.

First, select or create a storage account for the backup files. The following script selects
a storage account or creates it if it does not exist.
PowerShell

$storage_accountname = "yourstorageaccount"

$storage_resourcegroupname = $resourcegroupname

$storage = Get-AzStorageAccount -ResourceGroupName $resourcegroupname `

-Name $storage_accountname -ErrorAction SilentlyContinue

If (-Not $storage)

{ $storage = New-AzStorageAccount -ResourceGroupName


$storage_resourcegroupname `

-Name $storage_accountname -SkuName Standard_GRS -Location $region }

7 Note

Automated Backup does not support storing backups in premium storage, but it
can take backups from VM disks which use Premium Storage.

Then use the New-AzVMSqlServerAutoBackupConfig command to enable and


configure the Automated Backup settings to store backups in the Azure storage
account. In this example, the backups are retained for 10 days. The second command,
Set-AzVMSqlServerExtension, updates the specified Azure VM with these settings.

PowerShell

$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable `

-RetentionPeriodInDays 10 -StorageContext $storage.Context `

-ResourceGroupName $storage_resourcegroupname

Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `

-VMName $vmname -ResourceGroupName $resourcegroupname

It could take several minutes to install and configure the SQL Server IaaS Agent.

7 Note

There are other settings for New-AzVMSqlServerAutoBackupConfig that apply


only to SQL Server 2016 and Automated Backup. SQL Server 2014 does not support
the following settings: BackupSystemDbs, BackupScheduleType,
FullBackupFrequency, FullBackupStartHour, FullBackupWindowInHours, and
LogBackupFrequencyInMinutes. If you attempt to configure these settings on a
SQL Server 2014 virtual machine, there is no error, but the settings do not get
applied. If you want to use these settings on a SQL Server 2016 virtual machine, see
Automated Backup for SQL Server 2016 Azure virtual machines.
To enable encryption, modify the previous script to pass the EnableEncryption
parameter along with a password (secure string) for the CertificatePassword parameter.
The following script enables the Automated Backup settings in the previous example
and adds encryption.

PowerShell

$password = "P@ssw0rd"

$encryptionpassword = $password | ConvertTo-SecureString -AsPlainText -Force

$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable `

-EnableEncryption -CertificatePassword $encryptionpassword `

-RetentionPeriodInDays 10 -StorageContext $storage.Context `

-ResourceGroupName $storage_resourcegroupname

Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `

-VMName $vmname -ResourceGroupName $resourcegroupname

To confirm your settings are applied, verify the Automated Backup configuration.

Disable Automated Backup


To disable Automated Backup, run the same script without the -Enable parameter to the
New-AzVMSqlServerAutoBackupConfig command. The absence of the -Enable
parameter signals the command to disable the feature. As with installation, it can take
several minutes to disable Automated Backup.

PowerShell

$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -ResourceGroupName


$storage_resourcegroupname

Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `

-VMName $vmname -ResourceGroupName $resourcegroupname

Example script
The following script provides a set of variables that you can customize to enable and
configure Automated Backup for your VM. In your case, you might need to customize
the script based on your requirements. For example, you would have to make changes if
you wanted to disable the backup of system databases or enable encryption.

PowerShell
$vmname = "yourvmname"

$resourcegroupname = "vmresourcegroupname"

$region = "Azure region name such as EASTUS2"

$storage_accountname = "storageaccountname"

$storage_resourcegroupname = $resourcegroupname

$retentionperiod = 10

# ResourceGroupName is the resource group which is hosting the VM where you


are deploying the SQL Server IaaS Extension

Set-AzVMSqlServerExtension -VMName $vmname `

-ResourceGroupName $resourcegroupname -Name "SQLIaasExtension" `

-Version "2.0" -Location $region

# Creates/use a storage account to store the backups

$storage = Get-AzStorageAccount -ResourceGroupName $resourcegroupname `

-Name $storage_accountname -ErrorAction SilentlyContinue

If (-Not $storage)

{ $storage = New-AzStorageAccount -ResourceGroupName


$storage_resourcegroupname `

-Name $storage_accountname -SkuName Standard_GRS -Location $region }

# Configure Automated Backup settings

$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable `

-RetentionPeriodInDays $retentionperiod -StorageContext $storage.Context


`

-ResourceGroupName $storage_resourcegroupname

# Apply the Automated Backup settings to the VM

Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `

-VMName $vmname -ResourceGroupName $resourcegroupname

Monitoring
To monitor Automated Backup on SQL Server 2014, you have two main options. Because
Automated Backup uses the SQL Server Managed Backup feature, the same monitoring
techniques apply to both.

First, you can poll the status by calling msdb.smart_admin.sp_get_backup_diagnostics.


Or query the msdb.smart_admin.fn_get_health_status table valued function.

7 Note

The schema for Managed Backup in SQL Server 2014 is msdb.smart_admin. In SQL
Server 2016 this changed to msdb.managed_backup, and the reference topics use
this newer schema. But for SQL Server 2014, you must continue to use the
smart_admin schema for all Managed Backup objects.

Another option is to take advantage of the built-in Database Mail feature for
notifications.

1. Call the msdb.smart_admin.sp_set_parameter stored procedure to assign an email


address to the SSMBackup2WANotificationEmailIds parameter.
2. Enable SendGrid to send the emails from the Azure VM.
3. Use the SMTP server and user name to configure Database Mail. You can configure
Database Mail in SQL Server Management Studio or with Transact-SQL commands.
For more information, see Database Mail.
4. Configure SQL Server Agent to use Database Mail.
5. Verify that the SMTP port is allowed both through the local VM firewall and the
network security group for the VM.

Next steps
Automated Backup configures Managed Backup on Azure VMs. So it is important to
review the documentation for Managed Backup on SQL Server 2014.

You can find additional backup and restore guidance for SQL Server on Azure VMs in
the following article: Backup and restore for SQL Server on Azure virtual machines.

For information about other available automation tasks, see SQL Server IaaS Agent
Extension.

For more information about running SQL Server on Azure VMs, see SQL Server on Azure
virtual machines overview.
Use the Azure portal to configure a
multiple-subnet availability group
(preview) for SQL Server on Azure VMs
Article • 05/10/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This article describes how to use the Azure portal to configure an availability group
for SQL Server on Azure VMs in multiple subnets by creating:

New virtual machines with SQL Server.


A Windows failover cluster.
An availability group.
A listener.

7 Note

This deployment method is currently in preview. It supports SQL Server 2016 and
later on Windows Server 2016 and later.

Deploying a multiple-subnet availability group through the portal provides an easy end-
to-end experience for users. It configures the virtual machines by following the best
practices for high availability and disaster recovery (HADR).

Although this article uses the Azure portal to configure the availability group
environment, you can also do so manually.

7 Note
It's possible to lift and shift your availability group solution to SQL Server on Azure
VMs by using Azure Migrate. To learn more, see Migrate an availability group.

Prerequisites
To configure an Always On availability group by using the Azure portal, you must have
the following prerequisites:

An Azure subscription

A resource group

A virtual network with custom DNS server IP address configured

A domain controller VM in the same virtual network

The following account permissions:

A domain user account that has Create Computer Object permissions in the
domain. This user will create the cluster and availability group, and will install
SQL Server.

For example, a domain user account ( account@domain.com ) typically has


sufficient permission. This account should also be part of the local administrator
group on each VM to create the cluster.

A domain SQL Server service account to control SQL Server. This should be the
same account for every SQL Server VM that you want to add to the availability
group.

Choose an Azure Marketplace image


Use Azure Marketplace to choose one of several preconfigured images from the virtual
machine gallery:

1. In the Azure portal, on the left menu, select Azure SQL. If Azure SQL isn't in the list,
select All services, type Azure SQL in the search box, and select the result.

2. Select + Create to open the Select SQL deployment option pane.

3. Under SQL virtual machines, select the High availability checkbox. In the Image
box, type the version of SQL Server that you're interested in (such as 2019), and
then choose a SQL Server image (such as Free SQL Server License: SQL 2019
Developer on Windows Server 2019).

After you select the High availability checkbox, the portal displays the supported
SQL Server versions, starting with SQL Server 2016.

4. Select Create.

Choose basic settings


On the Basics tab, select the subscription and resource group. Also, provide details for
the SQL Server instances that you're creating for your availability group.

1. From the dropdown lists, choose the subscription and resource group that contain
your domain controller and where you intend to deploy your availability group.

2. Use the slider to select the number of virtual machines that you want to create for
the availability group. The minimum is 2, and the maximum is 9. The virtual
machine names are pre-populated, but you can edit them by selecting Edit names.
3. For Region, select a region. All VMs will be deployed to the same region.

4. For Availability, select either Availability Zone or Availability Set. For more
information about availability options, see Availability.

5. For Security type, select either Standard or Trusted launch.

6. The Image area displays the chosen SQL Server VM image. Use the dropdown to
change the image to deploy. Select Configure VM generation to choose the VM
generation.

7. Select See all sizes for the size of the virtual machines. All created VMs will be the
same size. For production workloads, see the recommended machine sizes and
configuration in Performance best practices for SQL Server on Azure VMs.

8. Under Virtual machine administrator account, provide a username and password.


The password must be at least 12 characters and meet the defined complexity
requirements. This account will be the administrator of the VM.

9. Under SQL Server License, you have the option to enable Azure Hybrid Benefit to
bring your own SQL Server license and save on licensing cost. This option is
available only if you're a Software Assurance customer.

Select Yes if you want to enable Azure Hybrid Benefit, and then confirm that you
have Software Assurance by selecting the checkbox. This option is unavailable if
you selected one of the free SQL Server images, such as the developer edition.
10. Select Next: Networking.

Choose network settings


On the Networking tab, configure your network options:

1. Select the virtual network from the dropdown list. The list is pre-populated based
on the region and resource group that you previously chose on the Basics tab. The
selected virtual network should contain the domain controller VM.

2. Under NIC network security group, select Basic. Choosing a basic security group
allows you to select inbound ports for the SQL Server VM.

3. Configure Public inbound ports, if needed, by selecting Allow selected ports.


Then use the dropdown list to select the allowed common ports.

4. Each virtual machine that you create has to be in its own subnet.

Under Create subnets, select Manage subnet configuration to open the Subnets
pane for the virtual network. Then, either create a subnet (+Subnet) for each
virtual machine or validate that a subnet is available for each virtual machine that
you want to create for the availability group.
When you're done, use the X to close the subnet management pane and go back
to the page for availability group deployment.

5. Choose a Public IP SKU type. All machines will use this public IP type.

6. Use the dropdown lists to assign the subnet, public IP address, and listener IP
address to each VM that you're creating. If you're using a Windows Server 2016
image, you also need to assign the cluster IP address.

When you're assigning a subnet to a virtual machine, the listener and cluster boxes
are pre-populated with available IP addresses. Place your cursor in the box if you
want to edit the IP address. Select Create new if you need to create a new IP
address.

7. If you want to delete the newly created public IP address and NIC when you delete
the VM, select the checkbox.

8. Select Next: WSFC and Credentials.

Choose failover cluster settings


On the WSFC and Credentials tab, provide account information to configure and
manage the Windows Server failover cluster and SQL Server.

For the deployment to work, all the accounts need to already be present in Active
Directory for the domain controller VM. This deployment process doesn't create any
accounts and will fail if you provide an invalid account. For more information about the
required permissions, review Configure cluster accounts in Active Directory.

1. Under Windows Server Failover Cluster details, provide the name that you want
to use for the failover cluster.

2. From the dropdown list, select the storage account that you want to use for the
cloud witness. If one doesn't exist, select Create a new storage account.

3. Under Windows Active Directory Domain details:

For Domain join user name and Domain join password, enter the credentials
for the account that creates the Windows Server failover cluster name in
Active Directory and joins the VMs to the domain. This account must have
Create Computer Objects permissions.

For Domain FQDN, enter a fully qualified domain name, such as


contoso.com.

4. Under SQL Server details, provide the domain-joined account that you want to use
to manage SQL Server on the VMs. You can choose to use the same user that
created the cluster and joined the VMs to the domain by choosing Same as
domain join account. Or you can select Custom and provide different account
details to use with the SQL Server service account.
5. Select Next: Disks.

Choose disk settings


On the Disks tab, configure your disk options for both the virtual machines and the SQL
Server storage configuration:

1. Under OS disk type, select the type of disk that you want for your operating
system. We recommend Premium for production systems, but it isn't available for a
Basic VM. To use a Premium SSD, change the virtual machine size.

2. Select an Encryption type value for the disks.

3. Under Storage configuration, select Change configuration to open the Configure


storage pane and specify storage requirements. You can choose to leave the
default values, or you can manually change the storage topology to suit your
needs for input/output operations per second (IOPS). For more information, see
Configure storage for SQL Server VMs.

4. Under Data storage, choose the location for your data drive, the disk type, and the
number of disks. You can also select the checkbox to store your system databases
on your data drive instead of the local C drive.
5. Under Log storage, you can choose to use the same drive as the data drive for
your transaction log files, or you can select a separate drive from the dropdown
list. You can also choose the name of the drive, the disk type, and the number of
disks.

6. Under TempDb storage, configure your tempdb database settings. Choices include
the location of the database files, the number of files, initial size, and autogrowth
size in megabytes.

Currently, during deployment, the maximum number of tempdb files is eight. But
you can add more files after the SQL Server VM is deployed.
7. Select OK to save your storage configuration settings.

8. Select Next: SQL Server settings.

Choose SQL Server settings


On the SQL Server settings tab, configure specific settings and optimizations for SQL
Server and the availability group:

1. Under Availability group details:

a. Provide the name of the availability group and the listener.

b. Select the role, either Primary or Secondary, for each virtual machine to be
created.

c. Choose the availability group settings that best suit your business needs.
2. Under Security & Networking, select SQL connectivity to access the SQL Server
instance on the VMs. For more information about connectivity options, see
Connectivity.

3. If you require SQL Server authentication, select Enable under SQL Server
Authentication and provide the login name and password. These credentials will
be used across all the VMs that you're deploying. For more information about
authentication options, see Authentication.

4. For Azure Key Vault integration, select Enable if you want to use Azure Key Vault
to store security secrets for encryption. Then, fill in the requested information. To
learn more, see Azure Key Vault integration.

5. Select Change SQL instance settings to modify SQL Server configuration options.
These options include server collation, maximum degree of parallelism (MAXDOP),
minimum and maximum memory, and whether you want to optimize for ad hoc
workloads.

Choose Prerequisites Validation


In order for the deployment to be successful, there are several prerequisite that are
required to be in place. To make it easier to validate that all permissions and
requirements are correct, use the PowerShell prerequisite script that is available for
download on this tab.

The script will be pre-populated with the values provided in the previous steps. Run the
PowerShell script as a domain user on the Domain Controller virtual machine or on a
domain joined Windows Server VM.

Once the script has been executed and the prerequisites have been validated, then
select the confirmation checkbox.
1. Select Review + Create.

2. On the Review + Create tab, review the summary. Then select Create to create the
SQL Servers, failover cluster, availability group, and listener.

If needed, you can select Download a template for automation.

You can monitor the deployment from the Azure portal. The Notifications button at the
top of the screen shows the basic status of the deployment.

After the deployment finishes, you can browse to the SQL virtual machines resource in
the portal. Under Settings, select High Availability to monitor the health of the
availability group. Select the arrow next to the name of your availability group to see a
list of all replicas.


7 Note

Synchronization Health on the High Availability page of the Azure portal will show
Not Healthy until you add databases to your availability group.

Configure a firewall
This deployment creates a firewall rule for the listener on port 5022, but it doesn't
configure a firewall rule for SQL Server VM port 1433. After the virtual machines are
created, you can configure any firewall rules. For more information, see Configure the
firewall.

Add databases to the availability group


Add databases to your availability group after deployment finishes. The following steps
use SQL Server Management Studio, but you can also use Transact-SQL or PowerShell.

1. Connect to one of your SQL Server VMs by using your preferred method, such as a
remote desktop connection (RDP). Use a domain account that's a member of the
sysadmin fixed server role on all of the SQL Server instances.

2. Open SQL Server Management Studio.

3. Connect to your SQL Server instance.

4. In Object Explorer, expand Always On High Availability.

5. Expand Availability Groups, right-click your availability group, and then select Add
Database.
6. Follow the prompts to select the database that you want to add to your availability
group.

7. Select OK to save your settings and add the database.

8. Refresh Object Explorer to confirm the status of your database as synchronized .

After you add databases, you can check your availability group in the Azure portal and
confirm that the status is Healthy.
Modify the availability group
After you deploy your availability group through the portal, all changes to the
availability group need to be done manually. If you want to remove a replica, you can do
so through SQL Server Management Studio or Transact-SQL, and then delete the VM
through the Azure portal. If you want to add a replica, you have to deploy the virtual
machine manually to the resource group, join it to the domain, and add the replica as
you normally would in a traditional on-premises environment.

Remove a cluster
You can remove a cluster by using the latest version of the Azure CLI or PowerShell.

Azure CLI

First, remove all of the SQL Server VMs from the cluster:

Azure CLI

# Remove the VM from the cluster metadata

# example: az sql vm remove-from-group --name SQLVM2 --resource-group


SQLVM-RG

az sql vm remove-from-group --name <VM1 name> --resource-group


<resource group name>

az sql vm remove-from-group --name <VM2 name> --resource-group


<resource group name>

If the SQL Server VMs that you removed were the only VMs in the cluster, then the
cluster will be destroyed. If any other VMs remain in the cluster, those VMs won't be
removed and the cluster won't be destroyed.

Next, remove the cluster metadata from the SQL IaaS Agent extension:

Azure CLI

# Remove the cluster from the SQL VM RP metadata

# example: az sql vm group delete --name Cluster --resource-group SQLVM-


RG

az sql vm group delete --name <cluster name> --resource-group <resource


group name>

Troubleshoot
If you run into problems, you can check the deployment history and then review
common errors and their resolutions.

Changes to the cluster and availability group via the portal happen through
deployments. Deployment history can provide more detail if there are problems with
creating or onboarding the cluster, or with creating the availability group.

To view the logs for the deployment and check the deployment history:

1. Sign in to the Azure portal .

2. Go to your resource group.

3. Under Settings, select Deployments.

4. Select the deployment of interest to learn more about it.

If the deployment fails and you want to redeploy by using the portal, you need to
manually cleanup the resources because deployment through the portal isn't
idempotent (repeatable). These clean-up tasks include deleting VMs and removing
entries in Active Directory and/or DNS. However, if you use the Azure portal to create a
template to deploy your availability group, and then use the template for automation,
clean-up of resources isn't necessary because the template is idempotent.

Next steps
After the availability group is deployed, consider optimizing the HADR settings for SQL
Server on Azure VMs.

To learn more, see:

Windows Server failover cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Overview of Always On availability groups
Migrate SQL Server availability group to
multi-subnets - SQL Server on Azure
VMs
Article • 04/27/2023

Applies to:
SQL Server on Azure VM

This article teaches you to migrate your Always On availability group (AG) from a single
subnet to multiple subnets to simplify connecting to your listener in Azure with your
SQL Server on Azure virtual machines (VMs).

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

Overview
Customers who are running SQL Server on Azure virtual machines can implement an
Always On availability group (AG) in either a single subnet or multiple subnets (multi-
subnet). A multi-subnet configuration simplifies the availability group environment by
removing the need for an Azure Load Balancer or a Distributed Network Name (DNN) to
route traffic to the listener on the Azure network. While using a multi-subnet approach
is recommended, it requires the connection strings for an application to use
MultiSubnetFailover = true , which might not be possible immediately due to
application-level changes.

If you originally created an availability group in a single subnet and are using an Azure
Load Balancer or DNN for the listener and now want to reduce complexity by moving to
a multi-subnet configuration, you can do so with some manual steps.

Prior to starting a migration of an existing environment, weigh the risks of changing an


in-use environment.

Consider the following two ways to migrate your availability group to multiple subnets:
Create a new environment to perform side-by-side testing
Manually move an existing availability group

U Caution

Performing any migration involves some risk, so as always test thoroughly in a non-
production environment before moving to a production environment.

New environment with side-by-side testing


The first method to move to a multi-subnet availability group is to set up a new
environment. If this is the chosen route, then you need to:

1. Create new virtual machines


2. Create a new availability group in a multi-subnet configuration
3. Backup your current database and restore them to the new environment

Initially in the new multi-subnet environment, create the listener with a different name
than the existing single subnet environment. A newly named listener in a new availability
group allows for side-by-side testing of the application (testing with both the multi-
subnet and the current load balancer or DNN in place).

Once the multi-subnet environment is thoroughly validated, then you could cut over to
the new infrastructure. Depending on the environment (production, test), use a
maintenance window to complete the change. During the maintenance window, restore
the database to the new primary replica, drop the availability group listener in both
environments, and then recreate the listener in the multi-subnet environment using the
same name as the previous listener, the one used in the application connection string.

Setting up a new environment in a multi-subnet configuration is now easier with the


Azure portal deployment experience.

Manually move an existing availability group


The other option is to manually move from the single subnet environment to a multi-
subnet environment. In order to migrate using this method, you need the following
prerequisites:

An IP address for each machine in a new subnet


Connection strings already using MultiSubnetFailover = true
To migrate your availability group to a multi-subnet configuration, follow these steps:

1. Create a new subnet for each secondary, as all virtual machines are currently in the
same subnet.

2. Determine the Cluster IP and Listener IP for all servers in the AG. For example, if
you have an availability group with two nodes, you have the following:

VM Name Subnet Cluster IP Listener IP

VM1 (primary) 10.1.1.0/24 (existing subnet) 10.1.1.15 10.1.1.16

VM2 (secondary) 10.1.2.0/24 (new subnet) 10.1.2.15 10.1.2.16

3. Add the Cluster IP and Listener IP to the primary replica server. Adding these IP
addresses is an online operation.

4. In the Azure portal, move the secondary server to the new subnet by going to the
virtual machine > Networking > Network Interface > IP Configurations. Moving
the server to a new subnet reboots the secondary replica server.

5. Add the Cluster IP and the Listener IP to the secondary replica server. Adding these
IP addresses is an online operation.

6. At this point, since the IP addresses and subnets are in place, so you can delete the
load balancer.

7. Drop the listener.

8. If you're using Windows Server 2019 and later versions, skip this step. If you're
using Windows Server 2016, manually add the cluster IPs to the FCI.

9. Recreate the listener with the new listener IPs.

10. Flush DNS on all servers using ipconfig /flushdns .

Next steps
Always On availability groups with SQL Server on Azure VMs
Overview of Always On availability groups
HADR settings for SQL Server on Azure VMs
Tutorial: Prerequisites for availability
groups in multiple subnets (SQL Server
on Azure VMs)
Article • 07/10/2023

Applies to: SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

In this tutorial, complete the prerequisites for creating an Always On availability group
for SQL Server on Azure Virtual Machines (VMs) in multiple subnets. At the end of this
tutorial, you will have a domain controller on two Azure virtual machines, two SQL
Server VMs in multiple subnets, and a storage account in a single resource group.

Time estimate: This tutorial creates several resources in Azure and may take up to 30
minutes to complete.

The following diagram illustrates the resources you deploy in this tutorial:
Prerequisites
To complete this tutorial, you need the following:

An Azure subscription. You can open a free Azure account or activate Visual
Studio subscriber benefits.
A basic understanding of, and familiarity with, Always On availability groups in SQL
Server.

Create resource group


To create the resource group in the Azure portal, follow these steps:

1. Sign in to the Azure portal .

2. Select + Create a resource to create a new resource in the portal.


3. Search for resource group in the Marketplace search box and choose the
Resource group tile from Microsoft. Select Create on the Resource group page.

4. On the Create a resource group page, fill out the values to create the resource
group:
a. Choose the appropriate Azure subscription from the drop-down.
b. Provide a name for your resource group, such as SQL-HA-RG.
c. Choose a region from the drop-down, such as West US 2. Be sure to deploy all
subsequent resources to this location as well.
d. Select Review + create to review your resource parameters, and then select
Create to create your resource group.

Create network and subnets


Next, create the virtual network and three subnets. To learn more, see Virtual network
overview.

To create the virtual network in the Azure portal, follow these steps:

1. Go to your resource group in the Azure portal and select + Create

2. Search for virtual network in the Marketplace search box and choose the virtual
network tile from Microsoft. Select Create on the Virtual network page.

3. On the Create virtual network page, enter the following information on the Basics
tab:
a. Under Project details, choose the appropriate Azure Subscription, and the
Resource group you created previously, such as SQL-HA-RG.
b. Under Instance details, provide a name for your virtual network, such as
SQLHAVNET, and choose the same region as your resource group from the
drop-down.

4. On the IP addresses tab, select the default subnet to open the Edit subnet page.
Change the name to DC-subnet to use for the domain controller subnet. Select
Save.
5. Select + Add subnet to add an additional subnet for your first SQL Server VM, and
fill in the following values:
a. Provide a value for the Subnet name, such as SQL-subnet-1.
b. Provide a unique subnet address range within the virtual network address
space. For example, you can iterate the third octet of DC-subnet address range
by 1.

For example, if your DC-subnet range is 10.38.0.0/24, enter the IP address


range 10.38.1.0/24 for SQL-subnet-1.
Likewise, if your DC-subnet IP range is 10.5.0.0/24, then enter 10.5.1.0/24
for the new subnet.

c. Select Add to add your new subnet.


6. Repeat the previous step to add an additional unique subnet range for your
second SQL Server VM with a name such as SQL-subnet-2. You can iterate the
third octet by one again.

For example, if your DC-subnet IP range is 10.38.0.0/24, and your SQL-


subnet-1 is 10.38.1.0/24, then enter 10.38.2.0/24 for the new subnet.
Likewise, if your DC-subnet IP range is 10.5.0.0/24, and your SQL-subnet-1 is
10.5.1.0/24, then enter the IP address range 10.5.2.0/24 for SQL-subnet-2.
7. After you've added the second subnet, review your subnet names and ranges (your
IP address ranges may differ from the image). If everything looks correct, select
Review + create, then Create to create your new virtual network.

Azure returns you to the portal dashboard and notifies you when the new network
is created.
Create domain controllers
After your network and subnets are ready, create a virtual machine (or two optionally,
for high availability) and configure it as your domain controller.

Create DC virtual machines


To create your domain controller (DC) virtual machines in the Azure portal, follow these
steps:

1. Go to your resource group in the Azure portal and select + Create

2. Search for Windows Server in the Marketplace search box.

3. On the Windows Server tile from Microsoft, select the Create drop-down and
choose the Windows Server 2016 Datacenter image.

4. Fill out the values on the Create a virtual machine page to create your domain
controller VM, such as DC-VM-1. Optionally, create an additional VM, such as DC-
VM-2 to provide high availability for the Active Directory Domain Services. Use the
values in the following tablet to create your VM(s):

Field Value

Subscription Your subscription

Resource group SQL-HA-RG

Virtual machine First domain controller: DC-VM-1.


name Second domain controller DC-VM-2.

Region The location where you deployed your resource group and virtual
network.

Availability Availability zone


options For Azure regions that do not support Availability zones, use Availability
sets instead. Create a new availability set and place all VMs created in
this tutorial inside the availability set.
Field Value

Availability zone Specify 1 for DC-VM-1.


Specify 2 for DC-VM-2.

Size D2s_v3 (2 vCPUs, 8 GB RAM)

User name DomainAdmin

Password Contoso!0000

Public inbound Allow selected ports


ports

Select inbound RDP (3389)


ports

OS disk type Premium SSD (locally redundant storage)

Virtual network SQLHAVNET

Subnet DC-subnet

Public IP Same name as the VM, such as DC-VM-1 or DC-VM-2

NIC network Basic


security group

Public inbound Allow selected ports


ports

Select inbound RDP (3389)


ports

Boot diagnostics Enable with managed storage account (recommended).

Azure notifies you when your virtual machines are created and ready to use.

Configure the domain controller


After your DC virtual machines are ready, configure the domain controller for
corp.contoso.com.

To configure DC-VM-1 as the domain controller, follow these steps:

1. Go to your resource group in the Azure portal and select the DC-VM-1 machine.

2. On the DC-VM-1 page, select Connect to download an RDP file for remote
desktop access and then open the file.
3. Connect to the RDP session using your configured administrator account
(DomainAdmin) and password (Contoso!0000).

4. Open the Server Manager dashboard (which may open by default) and choose to
Add roles and features.

5. Select Next until you get to the Server Roles section.

6. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any additional features that are required by these roles.

7 Note

Windows warns you that there is no static IP address. If you're testing the
configuration, select Continue. For production scenarios, set the IP address to
static in the Azure portal, or use PowerShell to set the static IP address of the
domain controller machine.
7. Select Next until you reach the Confirmation section. Select the Restart the
destination server automatically if required check box.

8. Select Install.

9. After the features finish installing, return to the Server Manager dashboard.

10. Select the new AD DS option on the left-hand pane.

11. Select the More link on the yellow warning bar.

12. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.

13. In the Active Directory Domain Services Configuration Wizard, use the following
values:
Page Setting

Deployment Configuration Add a new forest


Root domain name = corp.contoso.com

Domain Controller Options DSRM Password = Contoso!0000


Confirm Password = Contoso!0000

14. Select Next to go through the other pages in the wizard. On the Prerequisites
Check page, verify that you see the following message: All prerequisite checks
passed successfully. You can review any applicable warning messages, but it's
possible to continue with the installation.

15. Select Install. The DC-VM-1 virtual machine automatically restarts.

Identify DNS IP address


Use the primary domain controller for DNS. To do so, identify the private IP address of
the VM used for the primary domain controller.

To identify the private IP address of the VM in the Azure portal, follow these steps:

1. Go to your resource group in the Azure portal and select the primary domain
controller, DC-VM-1.
2. On the DC-VM-1 page, choose Networking in the Settings pane.
3. Note the NIC Private IP address. Use this IP address as the DNS server for the
other virtual machines. In the example image, the private IP address is 10.38.0.4.

Configure virtual network DNS


After you create the first domain controller and enable DNS, configure the virtual
network to use this VM for DNS.
To configure your virtual network for DNS, follow these steps:

1. Go to your resource group in the Azure portal , and select your virtual network,
such as SQLHAVNET.
2. Select DNS servers under the Settings pane and then select Custom.
3. Enter the private IP address you identified previously in the IP Address field, such
as 10.38.0.4 .
4. Select Save.

Configure second domain controller


After the primary domain controller restarts, you can optionally configure the second
domain controller for the purpose of high availability. If you do not want to configure a
second domain controller, skip this step. However, a second domain controller is
recommended in production environments.

Set the preferred DNS server address, join the domain, and then configure the
secondary domain controller.

Set preferred DNS server address

The preferred DNS server address should not be updated directly within a VM, it should
be edited from the Azure portal, or PowerShell, or Azure CLI. The steps below are to
make the change inside of the Azure portal:
1. Sign-in to the Azure portal .

2. In the search box at the top of the portal, enter Network interface. Select Network
interfaces in the search results.

3. Select the network interface for the second domain controller that you want to
view or change settings for from the list.

4. In Settings, select DNS servers.

5. Select either:

Inherit from virtual network: Choose this option to inherit the DNS server
setting defined for the virtual network the network interface is assigned to.
This would automatically inherit the primary domain controller as the DNS
server.

Custom: You can configure your own DNS server to resolve names across
multiple virtual networks. Enter the IP address of the server you want to use
as a DNS server. The DNS server address you specify is assigned only to this
network interface and overrides any DNS setting for the virtual network the
network interface is assigned to. If you select custom, then input the IP
address of the primary domain controller, such as 10.38.0.4 .

6. Select Save.

7. If using a Custom DNS Server, return to the virtual machine in the Azure portal and
restart the VM.

Join the domain


Next, join the corp.contoso.com domain. To do so, follow these steps:

1. Remotely connect to the virtual machine using the BUILTIN\DomainAdmin


account. This account is the same one used when creating the domain controller
virtual machines.
2. Open Server Manager, and select Local Server.
3. Select WORKGROUP.
4. In the Computer Name section, select Change.
5. Select the Domain checkbox and type corp.contoso.com in the text box. Select
OK.
6. In the Windows Security popup dialog, specify the credentials for the default
domain administrator account (CORP\DomainAdmin) and the password
(Contoso!0000).
7. When you see the "Welcome to the corp.contoso.com domain" message, select
OK.
8. Select Close, and then select Restart Now in the popup dialog.

Configure domain controller


Once your server has joined the domain, you can configure it as the second domain
controller. To do so, follow these steps:

1. If you're not already connected, open an RDP session to your secondary domain
controller, and open Server Manager Dashboard (which may be open by default).

2. Select the Add roles and features link on the dashboard.

3. Select Next until you get to the Server Roles section.

4. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any additional features that are required by these roles.

5. After the features finish installing, return to the Server Manager dashboard.

6. Select the new AD DS option on the left-hand pane.

7. Select the More link on the yellow warning bar.

8. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.

9. Under Deployment Configuration, select Add a domain controller to an existing


domain.
10. Click Select.

11. Connect by using the administrator account


(CORP.CONTOSO.COM\domainadmin) and password (Contoso!0000).

12. In Select a domain from the forest, choose your domain and then select OK.

13. In Domain Controller Options, use the default values and set a DSRM password.

7 Note

The DNS Options page might warn you that a delegation for this DNS server
can't be created. You can ignore this warning in non-production
environments.

14. Select Next until the dialog reaches the Prerequisites check. Then select Install.

After the server finishes the configuration changes, restart the server.

Add second DC IP address to DNS


After your second domain controller is configured, follow the same steps as before to
identify the private IP address of the VM, and add the private IP address as a secondary
custom DNS server in the virtual network of your resource group. Adding the secondary
DNS server in the Azure portal enables redundancy of the DNS service.

Configure domain accounts


After your domain controller(s) have been configured, and you've set your DNS server(s)
in the Azure portal, create domain accounts for the user who is installing SQL Server,
and for the SQL Server service account.

Configure two accounts in total, one installation account and then a service account for
both SQL Server VMs. For example, use the values in the following table for the
accounts:

Account VM Full domain Description


name

Install Both Corp\Install Log in to either VM with this account to configure the
cluster and availability group.

SQLSvc Both Corp\SQLSvc Use this account for the SQL Server service on both SQL
Account VM Full domain Description
name

Server VMs.

Follow these steps to create each account:

1. Connect to your primary domain controller machine, such as DC-VM-1.

2. In Server Manager, select Tools, and then select Active Directory Administrative
Center.

3. Select corp (local) from the left pane.

4. On the right Tasks pane, select New, and then select User.

5. Enter in the new user account and set a complex password. For non-production
environments, set the user account to never expire.

6. Select OK to create the user.

7. Repeat these steps to create all accounts.

Grant installation account permissions


Once the accounts are created, grant required domain permissions to the installation
account so the account is able to create objects in AD.

To grant the permissions to the installation account, follow these steps:

1. Open the Active Directory Administrative Center from Server Manager, if it's not
open already.

2. Select corp (local) in the left pane.


3. In the right-hand Tasks pane, verify you see corp (local) in the drop-down, and
then select Properties underneath.

4. Select Extensions, and then select the Advanced button on the Security tab.

5. On the Advanced Security Settings for corp dialog box, select Add.

6. Select Select a principal, search for CORP\Install, and then select OK.

7. Check the boxes next to Read all properties and Create Computer Objects.
8. Select OK, and then select OK again. Close the corp properties window.

Now that you've finished configuring Active Directory and the user objects, you are
ready to create your SQL Server VMs.

Create SQL Server VMs


Once your AD, DNS, and user accounts are configured, you are ready to create your SQL
Server VMs. For simplicity, use the SQL Server VM images in the marketplace.

However, before creating your SQL Server VMs, consider the following design decisions:

Availability - Availability Zones


For the highest level of redundancy, resiliency and availability deploy the VMs within
separate Availability Zones. Availability Zones are unique physical locations within an
Azure region. Each zone is made up of one or more datacenters with independent
power, cooling, and networking. For Azure regions that do not support Availability
Zones yet, use Availability Sets instead. Place all the VMs within the same Availability Set.

Storage - Azure Managed Disks

For the virtual machine storage, use Azure Managed Disks. Microsoft recommends
Managed Disks for SQL Server virtual machines as they handle storage behind the
scenes. For more information, see Azure Managed Disks Overview.

Network - Private IP addresses in production

For the virtual machines, this tutorial uses public IP addresses. A public IP address
enables remote connection directly to the virtual machine over the internet and makes
configuration steps easier. In production environments, Microsoft recommends only
private IP addresses in order to reduce the vulnerability footprint of the SQL Server
instance VM resource.

Network - Single NIC per server

Use a single NIC per server (cluster node). Azure networking has physical redundancy,
which makes additional NICs unnecessary on a failover cluster deployed to an Azure
virtual machine. The cluster validation report warns you that the nodes are reachable
only on a single network. You can ignore this warning when your failover cluster is on
Azure virtual machines.

To create your VMs, follow these steps:

1. Go to your resource group in the Azure portal and select + Create.

2. Search for Azure SQL and select the Azure SQL tile from Microsoft.

3. On the Azure SQL page, select Create and then choose the SQL Server 2016 SP2
Enterprise on Windows Server 2016 image from the drop-down.
Use the following table to fill out the values on the Create a virtual machine page to
create both SQL Server VMs, such as SQL-VM-1 and SQL-VM-2 (your IP addresses may
differ from the examples in the table):

Configuration SQL-VM-1 SQL-VM-2

Gallery image SQL Server 2016 SP2 Enterprise on SQL Server 2016 SP2 Enterprise on
Windows Server 2016 Windows Server 2016

VM basics Name = SQL-VM-1 Name = SQL-VM-2


User Name = DomainAdmin User Name = DomainAdmin
Password = Contoso!0000 Password = Contoso!0000
Subscription = Your subscription Subscription = Your subscription
Resource group = SQL-HA-RG Resource group = SQL-HA-RG
Location = Your Azure location Location = Your Azure location

VM Size SIZE = E2ds_v4 (2 vCPUs, 16 GB RAM) SIZE = E2ds_v4 (2 vCPUs, 16 GB RAM)

VM Settings Availability options = Availability Availability options = Availability


zone zone
Availability zone = 1 Availability zone = 2
Public inbound ports = Allow Public inbound ports = Allow
selected ports selected ports
Select inbound ports = RDP (3389) Select inbound ports = RDP (3389)
OS disk type = Premium SSD (locally OS disk type = Premium SSD (locally
redundant storage) redundant storage)
Virtual network = SQLHAVNET Virtual network = SQLHAVNET
Subnet = SQL-subnet-1(10.38.1.0/24) Subnet = SQL-subnet-2(10.38.2.0/24)
Public IP address = Automatically Public IP address = Automatically
generated. generated.
NIC network security group = Basic NIC network security group = Basic
Public inbound ports = Allow Public inbound ports = Allow
Configuration SQL-VM-1 SQL-VM-2

selected ports selected ports


Select inbound ports = RDP (3389) Select inbound ports = RDP (3389)
Boot Diagnostics = Enable with Boot Diagnostics = Enable with
managed storage account managed storage account
(recommended) (recommended)

SQL Server SQL connectivity = Private (within SQL connectivity = Private (within
settings Virtual Network) Virtual Network)
Port = 1433 Port = 1433
SQL Authentication = Disable SQL Authentication = Disable
Azure Key Vault integration = Azure Key Vault integration =
Disable Disable
Storage optimization = Transactional Storage optimization = Transactional
processing processing
SQL Data = 1024 GiB, 5000 IOPS, 200 SQL Data = 1024 GiB, 5000 IOPS, 200
MB/s MB/s
SQL Log = 1024 GiB, 5000 IOPS, 200 SQL Log = 1024 GiB, 5000 IOPS, 200
MB/s MB/s
SQL TempDb = Use local SSD drive SQL TempDb = Use local SSD drive
Automated patching = Sunday at Automated patching = Sunday at
2:00 2:00
Automated backup = Disable Automated backup = Disable

7 Note

These suggested machine sizes are only intended for testing availability groups in
Azure Virtual Machines. For optimized production workloads, see the size
recommendations in Performance best practices for SQL Server on Azure VMs.

Configure SQL Server VMs


After VM creation completes, configure your SQL Server VMs by adding a secondary IP
address to each VM, and joining them to the domain.

Add secondary IPs to SQL Server VMs


In the multi-subnet environment, assign secondary IP addresses to each SQL Server VM
to use for the availability group listener, and for Windows Server 2016 and earlier, assign
secondary IP addresses to each SQL Server VM for the cluster IP address as well. Doing
this negates the need for an Azure Load Balancer, as is the requirement in a single
subnet environment.
On Windows Server 2016 and earlier, you need to assign an additional secondary IP
address to each SQL Server VM to use for the windows cluster IP since the cluster uses
the Cluster Network Name rather than the default Distributed Network Name (DNN)
introduced in Windows Server 2019. With a DNN, the cluster name object (CNO) is
automatically registered with the IP addresses for all the nodes of the cluster,
eliminating the need for a dedicated windows cluster IP address.

If you're on Windows Server 2016 and prior, follow the steps in this section to assign a
secondary IP address to each SQL Server VM for both the availability group listener, and
the cluster.

If you're on Windows Server 2019 or later, only assign a secondary IP address for the
availability group listener, and skip the steps to assign a windows cluster IP, unless you
plan to configure your cluster with a virtual network name (VNN), in which case assign
both IP addresses to each SQL Server VM as you would for Windows Server 2016.

To assign additional secondary IPs to the VMs, follow these steps:

1. Go to your resource group in the Azure portal and select the first SQL Server
VM, such as SQL-VM-1.

2. Select Networking in the Settings pane, and then select the Network Interface:

3. On the Network Interface page, select IP configurations in the Settings pane and
then choose + Add to add an additional IP address:
4. On the Add IP configuration page, do the following:
a. Specify the Name as the Windows Cluster IP, such as windows-cluster-ip for
Windows 2016 and earlier. Skip this step if you're on Windows Server 2019 or
later.
b. Set the Allocation to Static.
c. Enter an unused IP address in the same subnet (SQL-subnet-1) as the SQL
Server VM (SQL-VM-1), such as 10.38.1.10 .
d. Leave the Public IP address at the default of Disassociate.
e. Select OK to finish adding the IP configuration.
5. Select + Add again to configure an additional IP address for the availability group
listener (with a name such as availability-group-listener), again specifying an
unused IP address in SQL-subnet-1 such as 10.38.1.11 :
6. Repeat these steps again for the second SQL Server VM, such as SQL-VM-2. Assign
two unused secondary IP addresses within SQL-subnet-2. Use the values from the
following table to add the IP configuration:

Field Input Input

Name windows-cluster-ip availability-group-listener

Allocation Static Static

IP address 10.38.2.10 10.38.2.11

Now you are ready to join the corp.contoso.com.

Join the servers to the domain


Once your two secondary IP addresses have been assigned to both SQL Server VMs, join
each SQL Server VM to the corp.contoso.com domain.

To join the corp.contoso.com domain, follow the same steps for the SQL Server VM as
you did when you joined the domain with the secondary domain controller.

Wait for each SQL Server VM to restart, and then you can add your accounts.

Add accounts
Add the installation account as an administrator on each VM, grant permission to the
installation account and local accounts within SQL Server, and update the SQL Server
service account.

Add install account


Once both SQL Server VMs have joined the domain, add CORP\Install as a member of
the local administrators group.

 Tip

Be sure you sign in with the domain administrator account. In previous steps, you
were using the BUILTIN administrator account. Now that the server is part of the
domain, use the domain account. In your RDP session, specify DOMAIN\username,
such as CORP\DomainAdmin.

To add the account as an admin, follow these steps:

1. Wait until the VM is restarted, then launch the RDP file again from the first SQL
Server VM to sign in to SQL-VM-1 by using the CORP\DomainAdmin account.
2. In Server Manager, select Tools, and then select Computer Management.
3. In the Computer Management window, expand Local Users and Groups, and then
select Groups.
4. Double-click the Administrators group.
5. In the Administrators Properties dialog, select the Add button.
6. Enter the user CORP\Install, and then select OK.
7. Select OK to close the Administrator Properties dialog.
8. Repeat these steps on SQL-VM-2.

Add account to sysadmin


The installation account (CORP\install) used to configure the availability group must be
part of the sysadmin fixed server role on each SQL Server VM.

To grant sysadmin rights to the installation account, follow these steps:

1. Connect to the server through the Remote Desktop Protocol (RDP) by using the
<MachineName>\DomainAdmin account, such as SQL-VM-1\DomainAdmin .
2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.
3. In Object Explorer, select Security.
4. Right-click Logins. Select New Login.
5. In Login - New, select Search.
6. Select Locations.
7. Enter the domain administrator network credentials.
8. Use the installation account (CORP\install).
9. Set the sign-in to be a member of the sysadmin fixed server role.
10. Select OK.
11. Repeat these steps on the second SQL Server VM, such as SQL-VM-2, connecting
with the relevant machine name account, such as SQL-VM-2\DomainAdmin .

Add system account


In later versions of SQL Server, the [NT AUTHORITY\SYSTEM] account does not have
permissions to SQL Server by default, and must be granted manually.

To add the [NT AUTHORITY\SYSTEM] and grant appropriate permissions, follow these
steps:

1. Connect to the first SQL Server VM through the Remote Desktop Protocol (RDP) by
using the <MachineName>\DomainAdmin account, such as SQL-VM-1\DomainAdmin .

2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.

3. Create an account for [NT AUTHORITY\SYSTEM] on each SQL Server instance by


using the following Transact-SQL (T-SQL) command:

SQL

USE [master]
GO
CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE=
[master]
GO

4. Grant the following permissions to [NT AUTHORITY\SYSTEM] on each SQL Server


instance:

ALTER ANY AVAILABILITY GROUP

CONNECT SQL
VIEW SERVER STATE

To grant these permissions, use the following Transact-SQL (T-SQL) command:


SQL

GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]


GO
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]
GO
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]
GO

5. Repeat these steps on the second SQL Server VM, such as SQL-VM-2, connecting
with the relevant machine name account, such as SQL-VM-2\DomainAdmin .

Set the SQL Server service accounts


The SQL Server service on each VM needs to use a dedicated domain account. Use the
domain accounts you created earlier: Corp\SQLSvc for both SQL-VM-1 and SQL-VM-2.

To set the service account, follow these steps:

1. Connect to the first SQL Server VM through the Remote Desktop Protocol (RDP) by
using the <MachineName>\DomainAdmin account, such as SQL-VM-1\DomainAdmin .
2. Open SQL Server Configuration Manager.
3. Right-click the SQL Server service, and then select Properties.
4. Provide the account (Corp\SQLSvc) and password.
5. Select Apply to commit your change and restart the SQL Server service.
6. Repeat these steps on the other SQL Server VM (SQL-VM-1), signing in with the
machine domain account, such as SQL-VM-2\DomainAdmin , and providing the service
account (Corp\SQLSvc).

Create Azure Storage Account


To deploy a two-node Windows Server Failover Cluster, a third member is necessary to
establish quorum. On Azure VMs, the cloud witness is the recommended quorum
option. To configure a cloud witness, you need an Azure Storage account. To learn more,
see Deploy a Cloud Witness for a Failover Cluster.

To create the Azure Storage Account in the portal:

1. In the portal, open the SQL-HA-RG resource group and select + Create

2. Search for storage account.

3. Select Storage account and select Create, configuring it with the following values:
a. Select your subscription and select the resource group SQL-HA-RG.
b. Enter a Storage Account Name for your storage account. Storage account
names must be between 3 and 24 characters in length and may contain
numbers and lowercase letters only. The storage account name must also be
unique within Azure.
c. Select your Region.
d. For Performance, select Standard: Recommended for most scenarios (general-
purpose v2 account). Azure Premium Storage is not supported for a cloud
witness.
e. For Redundancy, select Locally redundant storage (LRS). Failover Clustering
uses the blob file as the arbitration point, which requires some consistency
guarantees when reading the data. Therefore you must select Locally redundant
storage for the Replication type.
f. Select Review + create

Configure the firewall


The availability group feature relies on traffic through the following TCP ports:

SQL Server VM: Port 1433 for a default instance of SQL Server.
Database mirroring endpoint: Any available port. Examples frequently use 5022.

Open these firewall ports on both SQL Server VMs. The method of opening the ports
depends on the firewall solution that you use, and may vary from the Windows Firewall
example provided in this section.

To open these ports on a Windows Firewall, follow these steps:

1. On the first SQL Server Start screen, launch Windows Firewall with Advanced
Security.

2. On the left pane, select Inbound Rules. On the right pane, select New Rule.

3. For Rule Type, choose Port.

4. For the port, specify TCP and type the appropriate port numbers. See the following
example:
5. Select Next.

6. On the Action page, select Allow the connection , and then select Next.

7. On the Profile page, accept the default settings, and then select Next.

8. On the Name page, specify a rule name (such as SQL Inbound) in the Name text
box, and then select Finish.

9. Repeat these steps on the second SQL Server VM.

Next steps
Now that you've configured the prerequisites, get started with configuring your
availability group in multiple subnets.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Always On availability groups overview
HADR settings for SQL Server on Azure VMs
Tutorial: Configure an availability group
in multiple subnets (SQL Server on
Azure VMs)
Article • 07/10/2023

Applies to: SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This tutorial shows how to create an Always On availability group for SQL Server on
Azure Virtual Machines (VMs) within multiple subnets. The complete tutorial creates a
Windows Server Failover Cluster, and an availability group with a two SQL Server replicas
and a listener.

Time estimate: Assuming your prerequisites are complete, this tutorial should take
about 30 minutes to complete.

Prerequisites
The following table lists the prerequisites that you need to complete before starting this
tutorial:

Requirement Description

Two SQL Server - Each VM in two different Azure availability zones or the same
instances availability set
- In separate subnets within an Azure Virtual Network
- With two secondary IPs assigned to each VM
- In a single domain

SQL Server service A domain account used by the SQL Server service for each machine
account
Requirement Description

Open firewall ports - SQL Server: 1433 for default instance


- Database mirroring endpoint: 5022 or any available port

Domain installation - Local administrator on each SQL Server


account - Member of SQL Server sysadmin fixed server role for each
instance of SQL Server

The tutorial assumes you have a basic understanding of SQL Server Always On
availability groups.

Create the cluster


The Always On availability group lives on top of the Windows Server Failover Cluster
infrastructure, so before deploying your availability group, you must first configure the
Windows Server Failover Cluster, which includes adding the feature, creating the cluster,
and setting the cluster IP address.

Add failover cluster feature


Add the failover cluster feature to both SQL Server VMs. To do so, follow these steps:

1. Connect to the SQL Server virtual machine through the Remote Desktop Protocol
(RDP) using a domain account that has permissions to create objects in AD, such
as the CORP\Install domain account created in the prerequisites article.

2. Open Server Manager Dashboard.

3. Select the Add roles and features link on the dashboard.


4. Select Next until you get to the Server Features section.

5. In Features, select Failover Clustering.

6. Add any additional required features.

7. Select Install to add the features.

8. Repeat the steps on the other SQL Server VM.

Create cluster
After the cluster feature has been added to each SQL Server VM, you're ready to create
the Windows Server Failover Cluster.

To create the cluster, follow these steps:

1. Use Remote Desktop Protocol (RDP) to connect to the first SQL Server VM (such as
SQL-VM-1) using a domain account that has permissions to create objects in AD,
such as the CORP\Install domain account created in the prerequisites article.

2. In the Server Manager dashboard, select Tools, and then select Failover Cluster
Manager.

3. In the left pane, right-click Failover Cluster Manager, and then select Create a
Cluster.

4. In the Create Cluster Wizard, create a two-node cluster by stepping through the
pages using the settings provided in the following table:

Page Settings

Before You Begin Use defaults.

Select Servers Type the first SQL Server name (such as SQL-VM-1) in Enter server
name and select Add.
Page Settings

Type the second SQL Server name (such as SQL-VM-2) in Enter


server name and select Add.

Validation Warning Select Yes. When I click Next, run configuration validation tests,
and then return to the process of creating the cluster.

Before you Begin Select Next.

Testing Options Choose Run only the tests I select.

Test Selection Uncheck Storage. Ensure Inventory, Network and System


Configuration are selected.

Confirmation Select Next.


Wait for the validation to complete.
Select View Report to review the report. You can safely ignore the
warning regarding VMs being reachable on only one network
interface. Azure infrastructure has physical redundancy and therefore
it is not required to add additional network interfaces.
Select Finish.

Access Point for Type a cluster name, for example SQLAGCluster1 in Cluster Name.
Administering the
Cluster

Confirmation Uncheck Add all eligible storage to the cluster and select Next.

Summary Select Finish.

2 Warning

If you do not uncheck Add all eligible storage to the cluster, Windows
detaches the virtual disks during the clustering process. As a result, they don't
appear in Disk Manager or Explorer until the storage is removed from the
cluster and reattached using PowerShell.

Set the failover cluster IP address


Typically, the IP address assigned to the cluster is the same IP address assigned to the
VM, which means that in Azure, the cluster IP address will be in a failed state, and
cannot be brought online. Change the cluster IP address to bring the IP resource online.

During the prerequisites, you should have assigned secondary IP addresses to each SQL
Server VM, as the example table below (your specific IP addresses may vary):
VM Subnet Subnet address Secondary IP Secondary IP
Name name range name address

SQL-VM- SQL-subnet- 10.38.1.0/24 windows-cluster-ip 10.38.1.10


1 1

SQL-VM- SQL-subnet- 10.38.2.0/24 windows-cluster-ip 10.38.2.10


2 2

Assign these IP addresses as the cluster IP addresses for each relevant subnet.

7 Note

On Windows Server 2019, the cluster creates a Distributed Server Name instead of
the Cluster Network Name, and the cluster name object (CNO) is automatically
registered with the IP addresses for all of the nodes in the cluster, eliminating the
need for a dedicated windows cluster IP address. If you're on Windows Server 2019,
either skip this section, and any other steps that refer to the Cluster Core
Resources or create a virtual network name (VNN)-based cluster using PowerShell.
See the blog Failover Cluster: Cluster Network Object for more information.

To change the cluster IP address, follow these steps:

1. In Failover Cluster Manager, scroll down to Cluster Core Resources and expand
the cluster details. You should see the Name and two IP Address resources from
each subnet in the Failed state.

2. Right-click the first failed IP Address resource, and then select Properties.
3. Select Static IP Address and update the IP address to the dedicated windows
cluster IP address in the subnet you assigned to the first SQL Server VM (such as
SQL-VM-1). Select OK.
4. Repeat the steps for the second failed IP Address resource, using the dedicated
windows cluster IP address for the subnet of the second SQL Server VM (such as
SQL-VM-2).
5. In the Cluster Core Resources section, right-click cluster name and select Bring
Online. Wait until the name and one of the IP address resources are online.

Since the SQL Server VMs are in different subnets the cluster will have an OR
dependency on the two dedicated windows cluster IP addresses. When the cluster name
resource comes online, it updates the domain controller (DC) server with a new Active
Directory (AD) computer account. If the cluster core resources move nodes, one IP
address goes offline, while the other comes online, updating the DC server with the new
IP address association.

 Tip

When running the cluster on Azure VMs in a production environment, change the
cluster settings to a more relaxed monitoring state to improve cluster stability and
reliability in a cloud environment. To learn more, see SQL Server VM - HADR
configuration best practices.
Configure quorum
On a two node cluster, a quorum device is necessary for cluster reliability and stability.
On Azure VMs, the cloud witness is the recommended quorum configuration, though
there are other options available. The steps in this section configure a cloud witness for
quorum. Identify the access keys to the storage account and then configure the cloud
witness.

Get access keys for storage account


When you create a Microsoft Azure Storage Account, it is associated with two Access
Keys that are automatically generated - primary access key and secondary access key.
Use the primary access key the first time you create the cloud witness, but subsequently
there are no restrictions to which key to use for the cloud witness.

Use the Azure portal to view and copy storage access keys for the Azure Storage
Account created in the prerequisites article.

To view and copy the storage access keys, follow these steps:

1. Go to your resource group in the Azure portal and select the storage account
you created.

2. Select Access Keys under Security + networking.

3. Select Show Keys and copy the key.

Configure cloud witness


After you have the access key copied, create the cloud witness for the cluster quorum.

To create the cloud witness, follow these steps:

1. Connect to the first SQL Server VM SQL-VM-1 with remote desktop.

2. Open Windows PowerShell in Administrator mode.

3. Run the PowerShell script to set TLS (Transport Layer Security) value for the
connection to 1.2:

PowerShell

[Net.ServicePointManager]::SecurityProtocol =
[Net.SecurityProtocolType]::Tls12

4. Use PowerShell to configure the cloud witness. Replace the values for storage
account name and access key with your specific information:

PowerShell

Set-ClusterQuorum -CloudWitness -AccountName "Storage_Account_Name" -


AccessKey "Storage_Account_Access_Key"

5. The following example output indicates success:

The cluster core resources are configured with a cloud witness.

Enable AG feature
The Always On availability group feature is disabled by default. Use the SQL Server
Configuration Manager to enable the feature on both SQL Server instances.

To enable the availability group feature, follow these steps:

1. Launch the RDP file to the first SQL Server VM (such as SQL-VM-1) with a domain
account that is a member of sysadmin fixed server role, such as the CORP\Install
domain account created in the prerequisites document

2. From the Start screen of one your SQL Server VMs, launch SQL Server
Configuration Manager.
3. In the browser tree, highlight SQL Server Services, right-click the SQL Server
(MSSQLSERVER) service and select Properties.

4. Select the Always On High Availability tab, then check the box to Enable Always
On availability groups:

5. Select Apply. Select OK in the pop-up dialog.

6. Restart the SQL Server service.

7. Repeat these steps for the other SQL Server instance.

Create database
For your database, you can either follow the steps in this section to create a new
database, or restore an AdventureWorks database. You also need to back up the
database to initialize the log chain. Databases that have not been backed up do not
meet the prerequisites for an availability group.

To create a database, follow these steps:

1. Launch the RDP file to the first SQL Server VM (such as SQL-VM-1) with a domain
account that is a member of the sysadmin fixed server role, such as the
CORP\Install domain account created in the prerequisites document.
2. Open SQL Server Management Studio and connect to the SQL Server instance.
3. In Object Explorer, right-click Databases and select New Database.
4. In Database name, type MyDB1.
5. Select the Options page, and choose Full from the Recovery model drop-down, if
it's not full by default. The database must be in full recovery mode to meet the
prerequisites of participating in an availability group.
6. Select OK to close the New Database page and create your new database.

To back up the database, follow these steps:


1. In Object Explorer, right-click the database, highlight Tasks, and then select Back
Up....

2. Select OK to take a full backup of the database to the default backup location.

Create file share


Create a backup file share that both SQL Server VMs and their service accounts have
access to.

To create the backup file share, follow these steps:

1. On the first SQL Server VM in Server Manager, select Tools. Open Computer
Management.

2. Select Shared Folders.

3. Right-click Shares, and select New Share... and then use the Create a Shared
Folder Wizard to create a share.

4. For Folder Path, select Browse and locate or create a path for the database backup
shared folder, such as C:\Backup . Select Next.

5. In Name, Description, and Settings verify the share name and path. Select Next.

6. On Shared Folder Permissions set Customize permissions. Select Custom....

7. On Customize Permissions, select Add....

8. Check Full Control to grant full access to the share the SQL Server service account
( Corp\SQLSvc ):
9. Select OK.

10. In Shared Folder Permissions, select Finish. Select Finish again.

Create availability group


After your database has been backed up, you are ready to create your availability group,
which automatically takes a full backup and transaction log backup from the primary
SQL Server replica and restores it on the secondary SQL Server instance with the
NORECOVERY option.

To create your availability group, follow these steps.

1. In Object Explorer in SQL Server Management Studio (SSMS) on the first SQL
Server VM (such as SQL-VM-1), right-click Always On High Availability and select
New Availability Group Wizard.
2. On the Introduction page, select Next. In the Specify availability group Name
page, type a name for the availability group in Availability group name, such as
AG1. Select Next.

3. On the Select Databases page, select your database, and then select Next. If your
database does not meet the prerequisites, make sure it's in full recovery mode, and
take a backup:
4. On the Specify Replicas page, select Add Replica.

5. The Connect to Server dialog pops up. Type the name of the second server in
Server name, such as SQL-VM-2. Select Connect.

6. On the Specify Replicas page, check the boxes for Automatic Failover and choose
Synchronous commit for the availability mode from the drop-down:
7. Select the Endpoints tab to confirm the ports used for the database mirroring
endpoint are those you opened in the firewall:

8. Select the Listener tab and choose to Create an availability group listener using
the following values for the listener:

Field Value

Listener DNS Name: AG1-Listener

Port Use the default SQL Server port. 1433

Network Mode: Static IP

9. Select Add to provide the secondary dedicated IP address for the listener for both
SQL Server VMs.

The following table shows the example IP addresses created for the listener from
the prerequisites document (though your specific IP addresses may vary):
VM Subnet Subnet address Secondary IP name Secondary IP
Name name range address

SQL-VM- SQL- 10.38.1.0/24 availability-group- 10.38.1.11


1 subnet-1 listener

SQL-VM- SQL- 10.38.2.0/24 availability-group- 10.38.2.11


2 subnet-2 listener

10. Choose the first subnet (such as 10.38.1.0/24) from the drop-down on the Add IP
address dialog box and then provide the secondary dedicated listener IPv4
address, such as 10.38.1.11 . Select OK.

11. Repeat this step again, but choose the other subnet from the drop-down (such as
10.38.2.0/24), and provide the secondary dedicated listener IPv4 address from the
other SQL Server VM, such as 10.38.2.11 . Select OK.

12. After reviewing the values on the Listener page, select Next:
13. On the Select Initial Data Synchronization page, choose Full database and log
backup and provide the network share location you created previously, such as
\\SQL-VM-1\Backup .
7 Note

Full synchronization takes a full backup of the database on the first instance
of SQL Server and restores it to the second instance. For large databases, full
synchronization is not recommended because it may take a long time. You
can reduce this time by manually taking a backup of the database and
restoring it with NO RECOVERY . If the database is already restored with NO
RECOVERY on the second SQL Server before configuring the availability group,

choose Join only. If you want to take the backup after configuring the
availability group, choose Skip initial data synchronization.

14. On the Validation page, confirm that all validation checks have passed, and then
choose Next:
15. On the Summary page, select Finish and wait for the wizard to configure your new
availability group. Choose More details on the Progress page to view the detailed
progress. When you see that the wizard completed successfully on the Results
page, inspect the summary to verify the availability group and listener were
created successfully.

16. Select Close to exit the wizard.


Check availability group
You can check the health of the availability group by using SQL Server Management
Studio, and the Failover Cluster Manager.

To check the status of the availability group, follow these steps:

1. In Object Explorer, expand Always On High Availability, and then expand


availability groups. You should now see the new availability group in this
container. Right-click the availability group and select Show Dashboard.

The availability group dashboard shows the replica, the failover mode of each
replica, and the synchronization state, such as the following example:

2. Open the Failover Cluster Manager, select your cluster, and choose Roles to view
the availability group role you created within the cluster. Choose the role AG1 and
select the Resources tab to view the listener and the associated IP addresses, such
as the following example:
At this point, you have an availability group with replicas on two instances of SQL Server
and a corresponding availability group listener as well. You can connect using the
listener and you can move the availability group between instances using SQL Server
Management Studio.

2 Warning

Do not try to fail over the availability group by using the Failover Cluster Manager.
All failover operations should be performed from within SQL Server Management
Studio, such as by using the Always On Dashboard or Transact-SQL (T-SQL). For
more information, see Restrictions for using the Failover Cluster Manager with
availability groups.

Test listener connection


After your availability group is ready, and your listener has been configured with the
appropriate secondary IP addresses, test the connection to the listener.

To test the connection, follow these steps:

1. Use RDP to connect to a SQL Server that is in the same virtual network, but does
not own the replica, such as the other SQL Server instance within the cluster, or
any other VM with SQL Server Management Studio installed to it.

2. Open SQL Server Management Studio, and in the Connect to Server dialog box
type the name of the listener (such as AG1-Listener) in Server name:, and then
select Options:

3. Enter MultiSubnetFailover=True in the Additional Connection Parameters window


and then choose Connect to automatically connect to whichever instance is
hosting the primary SQL Server replica:

7 Note
While connecting to availability group on different subnets, setting
MultiSubnetFailover=true provides faster detection of and connection to the

current primary replica. See Connecting with MultiSubnetFailover

Next steps
Now that you've configured your multi-subnet availability group, if needed, you can
extend this across multiple regions.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Always On availability groups overview
HADR settings for SQL Server on Azure VMs
Configure a multi-subnet availability
group across Azure regions - SQL Server
on Azure VMs
Article • 03/03/2023

Applies to:
SQL Server on Azure VM

This tutorial explains how to configure an Always On availability group replica for SQL
Server on Azure Virtual Machines (VMs) in an Azure region that is remote to the primary
replica. You can use this configuration for disaster recovery (DR).

You can also use the steps in this article to extend an existing on-premises availability
group to Azure.

This tutorial builds on the tutorial to manually deploy an availability group in multiple
subnets in a single region. Mentions of the local region in this article refer to the virtual
machines and availability group already configured in the first region. The remote
region is the new infrastructure that's being added in this tutorial.

Overview
The following image shows a common deployment of an availability group on Azure
virtual machines:
In the deployment shown in the diagram, all virtual machines are in one Azure region.
The availability group replicas can have synchronous commit with automatic failover on
SQL-VM-1 and SQL-VM-2. To build this architecture, see the availability group template
or tutorial.

This architecture is vulnerable to downtime if the Azure region becomes inaccessible. To


overcome this vulnerability, add a replica in a different Azure region. The following
diagram shows how the new architecture looks:

The diagram shows a new virtual machine called SQL-VM-3. SQL-VM-3 is in a different
Azure region. It's added to the Windows Server failover cluster and can host an
availability group replica. In this architecture, the replica in the remote region is normally
configured with asynchronous commit availability mode and manual failover mode.

7 Note

An Azure availability set is required when more than one virtual machine is in the
same region. If only one virtual machine is in the region, the availability set is not
required.

You can place a virtual machine in an availability set only at creation time. If the
virtual machine is already in an availability set, you can add a virtual machine for an
additional replica later.

When availability group replicas are on Azure virtual machines in different Azure
regions, you can connect the virtual networks by using virtual network peering or a site-
to-site VPN gateway.

) Important

This architecture incurs outbound data charges for data replicated between Azure
regions. See Bandwidth pricing .

Create the network and subnet


Before you create a virtual network and subnet in a new region, decide on the address
space, subnet network, cluster IP, and availability group listener IP addresses that you'll
use for the remote region.

The following table lists details for the local (current) region and what will be set up in
the new remote region.

Type Local Remote region

Address space 10.38.0.0/16 10.19.0.0/16

DC Subnet network 10.38.0.0/24 10.19.0.0/24

SQL Subnet 1 network 10.38.1.0/24 10.19.1.0/24

SQL Subnet 2 network 10.38.2.0/24 n/a

Cluster IP 1 10.38.1.10 10.19.1.10


Type Local Remote region

Cluster IP 2 10.38.2.10 n/a

Availability group listener IP 1 10.38.1.11 10.19.1.11

Availability group listener IP 1 10.38.2.11 n/a

To create a virtual network and subnet in the new region in the Azure portal:

1. Go to your resource group in the Azure portal and select + Create.

2. Search for virtual network in the Marketplace search box, and then select the
virtual network tile from Microsoft.

3. On the Create virtual network page, select Create. Then enter the following
information on the Basics tab:
a. Under Project details, for Subscription, select the appropriate Azure
subscription. For Resource group, select the resource group that you created
previously, such as SQL-HA-RG.
b. Under Instance details, provide a name for your virtual network, such as
remote_HAVNET. Then choose a new remote region.

4. On the IP addresses tab, select the ellipsis (...) next to + Add a subnet. Select
Delete address space to remove the existing address space, if you need a different
address range.

5. Select Add an IP address space to open the pane to create the address space that
you need. This tutorial uses the address space of the remote region: 10.19.0.0/16.
Select Add.

6. Add subnets for the domain controller and the SQL Server.

a. Select + Add a subnet

b. Provide a value for the Subnet name, such as DC-Subnet.

c. Provide a unique subnet address range within the virtual network address
space.

For example, if your address range is 10.19.0.0/16, enter these values for the DC-
Subnet subnet: 10.19.1.0 for Starting address and /24 for Subnet size.
d. Select Add to add your new subnet.

e. Repeat the process for the SQL-subnet1. When complete, you should have a
subnet for the domain controller in the remote region and a subnet for each
SQL Server in the remote region. For example, in this tutorial, the remote region
virtual network contains:

7. Select Review + create to create the virtual network.

Configure virtual network DNS


After you create the virtual network, configure it to use the DNS server from the local or
primary domain controller.

To configure your virtual network for DNS, follow these steps:

1. Go to your resource group in the Azure portal , and select your virtual network,
such as remote-HAVNET.
2. Select DNS servers under the Settings pane and then select Custom.
3. Enter the private IP address you identified previously in the IP Address field, such
as 10.38.0.4 .
4. Select Save.

Connect the virtual networks in the two Azure


regions
After you create the new virtual network and subnet, you're ready to connect the two
regions so they can communicate with each other. There are two methods to do this:

Connect virtual networks with virtual network peering by using the Azure portal
(recommended)

In some cases, you might have to use PowerShell to create the connection
between virtual networks. For example, if you use different Azure accounts, you
can't configure the connection in the portal. In this case, review Configure a
network-to-network connection by using the Azure portal.

Configure a site-to-site VPN gateway connection by using the Azure portal

This tutorial uses virtual network peering. To configure virtual network peering:

1. In the search box at the top of the Azure portal, type autoHAVNET, which is the
virtual network in your local region. When autoHAVNET appears in the search
results, select it.

2. Under Settings, select Peerings, and then select + Add.


3. Enter or select the following information, accept the defaults for the remaining
settings, and then select Add.

Setting Value

This virtual
network

Peering link Enter autoHAVNET-remote_HAVNET for the name of the peering from
name autoHAVNET to the remote virtual network.

Remote
virtual
network

Peering link Enter remote_HAVNET-autoHAVNET for the name of the peering from the
name remote virtual network to autoHAVNET.

Subscription Select your subscription for the remote virtual network.

Virtual Select remote_HAVNET for the name of the remote virtual network. The
network remote virtual network can be in the same region of autoHAVNET or in a
different region.

4. On the Peerings page, Peering status is Connected.


If you don't see a Connected status, select the Refresh button.

Create a domain controller


A domain controller in the new region is necessary to provide authentication if the
primary site isn't available. To create the domain controller in the new region:

1. Return to the SQL-HA-RG resource group.


2. Select + Create.
3. Type Windows Server 2016 Datacenter, and then select the Windows Server 2016
Datacenter result.
4. In Windows Server 2016 Datacenter, verify that the deployment model is Resource
Manager, and then select Create.

The following table shows the settings for the two machines:

Setting Value

Name Remote domain controller: DC-VM-3

VM disk type SSD

User name DomainAdmin

Password Contoso!0000

Subscription Your subscription

Resource group SQL-HA-RG

Location Your location

Size DS1_V2

Storage Use managed disks: Yes

Virtual network remote_HAVNET

Subnet DC-subnet
Setting Value

Public IP address Same name as the VM

Network security group Same name as the VM

Diagnostics Enabled

Diagnostics storage account Automatically created

Azure creates the virtual machine.

Configure the domain controller


In the following steps, configure the DC-VM-3 machine as a domain controller for
corp.contoso.com:

Set preferred DNS server address

The preferred DNS server address shouldn't be updated directly within a VM, it should
be edited from the Azure portal, or PowerShell, or Azure CLI. The steps below are to
make the change inside of the Azure portal:

1. Sign-in to the Azure portal .

2. In the search box at the top of the portal, enter Network interface. Select Network
interfaces in the search results.

3. Select the network interface for the second domain controller that you want to
view or change settings for from the list.

4. In Settings, select DNS servers.

5. Since this domain controller isn't in the same virtual network as the primary
domain controller select Custom and input the IP address of the local domain
controller, such as 10.38.0.4 . The DNS server address you specify is assigned only
to this network interface and overrides any DNS setting for the virtual network the
network interface is assigned to.

6. Select Save.

7. Return to the virtual machine in the Azure portal and restart the VM. Once the
virtual machine has restarted, you can join the VM to the domain.
Join the domain
Next, join the corp.contoso.com domain. To do so, follow these steps:

1. Remotely connect to the virtual machine using the BUILTIN\DomainAdmin


account.
2. Open Server Manager, and select Local Server.
3. Select WORKGROUP.
4. In the Computer Name section, select Change.
5. Select the Domain checkbox and type corp.contoso.com in the text box. Select
OK.
6. In the Windows Security popup dialog, specify the credentials for the default
domain administrator account (CORP\DomainAdmin) and the password
(Contoso!0000).
7. When you see the "Welcome to the corp.contoso.com domain" message, select
OK.
8. Select Close, and then select Restart Now in the popup dialog.

Configure domain controller

Once your server has joined the domain, you can configure it as the second domain
controller. To do so, follow these steps:

1. If you're not already connected, open an RDP session to your secondary domain
controller, and open Server Manager Dashboard (which may be open by default).

2. Select the Add roles and features link on the dashboard.

3. Select Next until you get to the Server Roles section.


4. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any additional features that are required by these roles.

5. After the features finish installing, return to the Server Manager dashboard.

6. Select the new AD DS option on the left-hand pane.

7. Select the More link on the yellow warning bar.

8. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.

9. Under Deployment Configuration, select Add a domain controller to an existing


domain.

10. Select Select.

11. Connect by using the administrator account


(CORP.CONTOSO.COM\domainadmin) and password (Contoso!0000).

12. In Select a domain from the forest, choose your domain and then select OK.

13. In Domain Controller Options, use the default values and set a DSRM password.

7 Note

The DNS Options page might warn you that a delegation for this DNS server
can't be created. You can ignore this warning in non-production
environments.

14. Select Next until the dialog reaches the Prerequisites check. Then select Install.

After the server finishes the configuration changes, restart the server.

Add second DC IP address to DNS


After your remote domain controller is configured, follow the same steps as before to
identify the private IP address of the VM, and add the private IP address as a secondary
custom DNS server in the virtual networks (both the local and remote virtual networks)
of your resource group. Adding the secondary DNS server in the Azure portal enables
redundancy of the DNS service.

Create a SQL Server VM


After the domain controller restarts, the next step is to create a SQL Server virtual
machine in the new region.

Before you proceed, consider the following design decisions:

Availability - Availability Zones

For the highest level of redundancy, resiliency and availability deploy the VMs within
separate Availability Zones. Availability Zones are unique physical locations within an
Azure region. Each zone is made up of one or more datacenters with independent
power, cooling, and networking. For Azure regions that don't support Availability Zones
yet, use Availability Sets instead. Place all the VMs within the same Availability Set.

Storage - Azure Managed Disks

For the virtual machine storage, use Azure Managed Disks. Microsoft recommends
Managed Disks for SQL Server virtual machines as they handle storage behind the
scenes. For more information, see Azure Managed Disks Overview.

Network - Private IP addresses in production

For the virtual machines, this tutorial uses public IP addresses. A public IP address
enables remote connection directly to the virtual machine over the internet and makes
configuration steps easier. In production environments, Microsoft recommends only
private IP addresses in order to reduce the vulnerability footprint of the SQL Server
instance VM resource.

Network - Single NIC per server

Use a single NIC per server (cluster node). Azure networking has physical redundancy,
which makes additional NICs unnecessary on a failover cluster deployed to an Azure
virtual machine. The cluster validation report will warn you that the nodes are reachable
only on a single network. You can ignore this warning when your failover cluster is on
Azure virtual machines.

Create and configure the SQL Server VM


To create the SQL Server VM, go back to the SQL-HA-RG resource group, and then
select Add. Search for the appropriate gallery item, select Virtual Machine, and then
select From Gallery. Use the information in the following table to help you create the
VMs:

Page Setting
Page Setting

Select the appropriate gallery item SQL Server 2016 SP1 Enterprise on Windows Server 2016

Virtual machine configuration: Name = SQL-VM-3

Basics
User Name = DomainAdmin

Password = Contoso!0000

Subscription = Your subscription

Resource group = SQL-HA-RG

Location = Your remote region

Virtual machine configuration: Size Size = DS2_V2 (2 vCPUs, 7 GB)

The size must support SSD storage (premium disk


support).

Virtual machine configuration: Storage: Use managed disks

Settings
Virtual network = remote-HAVNET

Subnet = SQL-subnet1 (10.19.1.0/24)

Public IP address = Automatically generated

Network security group = None

Monitoring Diagnostics = Enabled

Diagnostics storage account = Use an automatically


generated storage account

Virtual machine configuration: SQL SQL connectivity = Private (within Virtual Network)

Server settings
Port = 1433

SQL Authentication = Disabled

Storage configuration = General

Automated patching = Sunday at 2:00

Automated backup = Disabled

Azure Key Vault integration = Disabled


7 Note

The machine size suggested here is meant for testing availability groups in Azure
virtual machines. For the best performance on production workloads, see the
recommendations for SQL Server machine sizes and configuration in Checklist: Best
practices for SQL Server on Azure VMs.

After the VM is fully provisioned, you need to configure it, join it to the
corp.contoso.com domain, and grant CORP\Install administrative rights to the
machines.

Configure SQL Server VMs


After VM creation completes, configure your SQL Server VMs by adding a secondary IP
address to each VM, and joining them to the domain.

Add secondary IPs to SQL Server VMs


In the multi-subnet environment, assign secondary IP addresses to each SQL Server VM
to use for the availability group listener, and for Windows Server 2016 and earlier, assign
secondary IP addresses to each SQL Server VM for the cluster IP address as well. Doing
this negates the need for an Azure Load Balancer, as is the requirement in a single
subnet environment.

On Windows Server 2016 and earlier, you need to assign an additional secondary IP
address to each SQL Server VM to use for the windows cluster IP since the cluster uses
the Cluster Network Name rather than the default Distributed Network Name (DNN)
introduced in Windows Server 2019. With a DNN, the cluster name object (CNO) is
automatically registered with the IP addresses for all the nodes of the cluster,
eliminating the need for a dedicated windows cluster IP address.

If you're on Windows Server 2016 and prior, follow the steps in this section to assign a
secondary IP address to each SQL Server VM for both the availability group listener, and
the cluster.

) Important

If you're on Windows Server 2019 or later, only assign a secondary IP address for
the availability group listener, and skip the steps to assign a windows cluster IP,
unless you plan to configure your cluster with a virtual network name (VNN), in
which case assign both IP addresses to each SQL Server VM as you would for
Windows Server 2016.

To assign additional secondary IPs to the VMs, follow these steps:

1. Go to your resource group in the Azure portal and select the SQL Server VM,
SQL-VM-3.

2. Select Networking in the Settings pane, and then select the Network Interface.

3. On the Network Interface page, select IP configurations in the Settings pane and
then choose + Add to add an additional IP address.

4. On the Add IP configuration page, do the following:


a. Specify the Name as the Windows Cluster IP, such as windows-cluster-ip for
Windows 2016 and earlier. Skip this step if you're on Windows Server 2019 or
later.
b. Set the Allocation to Static.
c. Enter an unused IP address in the same subnet (SQL-subnet-1) as the SQL
Server VM (SQL-VM-1), such as 10.19.1.10 .
d. Leave the Public IP address at the default of Disassociate.
e. Select OK to finish adding the IP configuration.

5. Select + Add again to configure an additional IP address for the availability group
listener (with a name such as availability-group-listener), again specifying an
unused IP address in SQL-subnet-1 such as 10.19.1.11 .

Now you're ready to join the corp.contoso.com.


Join the server to the domain
To join the VM to corp.contoso.com, use the following steps for the SQL Server VM:

1. Remotely connect to the virtual machine by using BUILTIN\DomainAdmin.


2. In Server Manager, select Local Server.
3. Select the WORKGROUP link.
4. In the Computer Name section, select Change.
5. Select the Domain check box, and enter corp.contoso.com in the text box. Then
select OK.
6. In the Windows Security pop-up dialog, specify the credentials for the default
domain administrator account (CORP\DomainAdmin) and the password
(Contoso!0000).
7. When you see the "Welcome to the corp.contoso.com domain" message, select
OK.
8. Select Close, and then select Restart Now in the pop-up dialog.

Add accounts
The next task is to add the installation account as an administrator on the SQL Server
VM, and then grant permission to that account and to local accounts within SQL Server.
You can then update the SQL Server service account.

Add the CORP\Install user as an administrator on each


cluster VM
After the SQL Server virtual machine restarts as a member of the domain, add
CORP\Install as a member of the local administrators group:

1. Wait until the VM is restarted, and then open the RDP file again from the primary
domain controller. Sign in to SQL-VM-3 by using the CORP\DomainAdmin
account.

 Tip

In earlier steps, you were using the BUILTIN administrator account. Now that
the server is in the domain, make sure that you sign in with the domain
administrator account. In your RDP session, specify DOMAIN\username.

2. In Server Manager, select Tools, and then select Computer Management.


3. In the Computer Management window, expand Local Users and Groups, and then
select Groups.

4. Double-click the Administrators group.

5. In the Administrator Properties dialog, select the Add button.

6. Enter the user as CORP\Install, and then select OK.

7. Select OK to close the Administrator Properties dialog.

Create a sign-in on each SQL Server VM for the


installation account
Use the installation account (CORP\Install) to configure the availability group. This
account needs to be a member of the sysadmin fixed server role on each SQL Server
VM. The following steps create a sign-in for the installation account. Complete them on
both SQL Server VMs.

1. Connect to the server through RDP by using the <MachineName>\DomainAdmin


account.

2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.

3. In Object Explorer, select Security.

4. Right-click Logins. Select New Login.

5. In Login - New, select Search.

6. Select Locations.

7. Enter the domain administrator's network credentials. Use the installation account
(CORP\Install).

8. Set the sign-in to be a member of the sysadmin fixed server role.

9. Select OK.

Configure system account permissions


To create a system account and grant appropriate permissions, complete the following
steps on each SQL Server instance:
1. Use the following script to create an account for [NT AUTHORITY\SYSTEM] :

SQL

USE [master]

GO

CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE=


[master]

GO

2. Grant the following permissions to [NT AUTHORITY\SYSTEM] :

ALTER ANY AVAILABILITY GROUP

CONNECT SQL
VIEW SERVER STATE

The following script grants these permissions:

SQL

GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]

GO

GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]

GO

GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]

GO

Set the SQL Server service accounts


On each SQL Server VM, complete the following steps to set the SQL Server service
account. Use the accounts that you created when you configured the domain accounts.

1. Open SQL Server Configuration Manager.


2. Right-click the SQL Server service, and then select Properties.
3. Set the account and password.

For SQL Server availability groups, each SQL Server VM needs to run as a domain
account.

Add failover clustering to SQL Server VM


To add failover clustering features, complete the following steps on both SQL Server
VMs:
1. Connect to the SQL Server virtual machine through RDP by using the CORP\Install
account. Open the Server Manager dashboard.

2. Select the Add roles and features link on the dashboard.

3. Select Next until you get to the Server Features section.

4. In Features, select Failover Clustering.

5. Add any required features.

6. Select Install.

7 Note

You can now automate this task, along with actually joining the SQL Server VMs to
the failover cluster, by using the Azure CLI and Azure quickstart templates.

Tune network thresholds for a failover cluster


When you're running Windows failover cluster nodes in Azure VMs with SQL Server
availability groups, change the cluster setting to a more relaxed monitoring state. This
change will make the cluster more stable and reliable. For details, see IaaS with SQL
Server: Tuning failover cluster network thresholds.

Configure the firewall on each SQL Server VM


The availability group feature relies on traffic through the following TCP ports:
SQL Server VM: Port 1433 for a default instance of SQL Server.
Database mirroring endpoint: Any available port. Examples frequently use 5022.

Open these firewall ports on both SQL Server VMs. The method of opening the ports
depends on the firewall solution that you use, and may vary from the Windows Firewall
example provided in this section.

To open these ports on a Windows Firewall, follow these steps:

1. On the first SQL Server Start screen, launch Windows Firewall with Advanced
Security.

2. On the left pane, select Inbound Rules. On the right pane, select New Rule.

3. For Rule Type, choose Port.

4. For the port, specify TCP and type the appropriate port numbers. See the following
example:

5. Select Next.

6. On the Action page, select Allow the connection , and then select Next.

7. On the Profile page, accept the default settings, and then select Next.
8. On the Name page, specify a rule name (such as SQL Inbound) in the Name text
box, and then select Finish.

Add SQL Server to the Windows Server failover


cluster
The new SQL Server VM needs to be added to the Windows Server failover cluster that
exists in your local region.

To add the SQL Server VM to the cluster:

1. Use RDP to connect to a SQL Server VM in the existing cluster. Use a domain
account that's an administrator on both SQL Server VMs and the witness server.

2. On the Server Manager dashboard, select Tools, and then select Failover Cluster
Manager.

3. On the left pane, right-click Failover Cluster Manager, and then select Connect to
Cluster.

4. In the Select Cluster window, under Cluster name, choose <Cluster on this
server>. Then select OK.

5. In the browser tree, right-click the cluster and select Add Node.

6. In the Add Node Wizard, select Next.

7. On the Select Servers page, add the name of the new SQL Server instance. Enter
the server name in Enter server name, select Add, and then select Next.

8. On the Validation Warning page, select No. (In a production scenario, you should
perform the validation tests). Then, select Next.

9. On the Confirmation page, if you're using Storage Spaces, clear the Add all
eligible storage to the cluster checkbox.

2 Warning

If you don't clear Add all eligible storage to the cluster, Windows detaches
the virtual disks during the clustering process. As a result, they don't appear in
Disk Manager or Explorer until the storage is removed from the cluster and
reattached via PowerShell.
10. Select Next.

11. Select Finish.

Failover Cluster Manager shows that your cluster has a new node and lists it in the
Nodes container.

Add the IP address for the Windows Server failover


cluster

7 Note

On Windows Server 2019, the cluster creates a distributed server name instead of a
cluster network name. If you're using Windows Server 2019, skip to Add an IP
address for the availability group listener. You can create a cluster network name
by using PowerShell. For more information, review the blog post Failover Cluster:
Cluster Network Object .

Next, create the IP address resource and add it to the cluster for the new SQL Server VM:

1. In Failover Cluster Manager, select the name of the cluster. Right-click the cluster
name under Cluster Core Resources, and then select Properties:

2. In the Cluster Properties dialog, select Add under IP Addresses, and then add the
IP address of the cluster name from the remote network region. Select OK in the IP
Address dialog, and then select OK in the Cluster Properties dialog to save the
new IP address.

3. Add the IP address as a dependency for the cluster core name.

Open the Cluster Properties dialog once more, and select the Dependencies tab.
Configure an OR dependency for the two IP addresses.

Add an IP address for the availability group listener


The IP address for the listener in the remote region needs to be added to the cluster. To
add the IP address:

1. In Failover Cluster Manager, right-click the availability group role. Point to Add
Resource, point to More Resources, and then select IP Address.

2. To configure this IP address, right-click the resource under Other Resources, and
then select Properties.

3. For Name, enter a name for the new resource. For Network, select the network
from the remote datacenter. Select Static IP Address, and then in the Address box,
assign the static IP address that you previously selected for the listener, in this
tutorial is it 10.19.1.11.

4. Select Apply, and then select OK.

5. Add the IP address resource as a dependency for the listener client access point
(network name) cluster.

Right-click the listener client access point, and then select Properties. Browse to
the Dependencies tab and add the new IP address resource to the listener client
access point. The following screenshot shows a properly configured IP address
cluster resource:

) Important

The cluster resource group includes both IP addresses. Both IP addresses are
dependencies for the listener client access point. Use the OR operator in the
cluster dependency configuration.

Enable availability groups


Next, enable the Always On availability groups feature. Complete these steps on the new
SQL Server VM:

1. From the Start screen, open SQL Server Configuration Manager.

2. In the browser tree, select SQL Server Services. Right-click the SQL Server
(MSSQLSERVER) service, and then select Properties.

3. Select the AlwaysOn High Availability tab, and then select Enable AlwaysOn
Availability Groups.
4. Select Apply. Select OK in the pop-up dialog.

5. Restart the SQL Server service.

Add a replica to the availability group


After SQL Server has restarted on the newly created virtual machine, you can add it as a
replica to the availability group:

1. Open a remote desktop session to the primary SQL Server instance in the
availability group, and then open SQL Server Management Studio (SSMS).

2. In Object Explorer in SSMS, open Always On High Availability > Availability


Groups. Right-click your availability group name, and then select Add Replica.

3. Connect to the existing replica, and then select Next.

4. Select Add Replica and connect to the new SQL Server VM.

) Important
A replica in a remote Azure region should be set to asynchronous replication
with manual failover.

5. On the Select Initial Data Synchronization page, select Full and specify a shared
network location. For the location, use the backup share that you created. In the
example, it was \\<First SQL Server>\Backup\. Then select Next.

7 Note

Full synchronization takes a full backup of the database on the first instance
of SQL Server and restores it to the second instance. For large databases, we
don't recommend full synchronization because it might take a long time.

You can reduce this time by manually backing up the database and restoring
it with NO RECOVERY . If the database is already restored with NO RECOVERY on
the second SQL Server instance before you configure the availability group,
select Join only. If you want to take the backup after you configure the
availability group, select Skip initial data synchronization.

6. On the Validation page, select Next. This page should look similar to the following
image:

7 Note
A warning for the listener configuration says you haven't configured an
availability group listener. You can ignore this warning because the listener is
already set up.

7. On the Summary page, select Finish, and then wait while the wizard configures the
new availability group. On the Progress page, you can select More details to view
the detailed progress.

After the wizard finishes the configuration, inspect the Results page to verify that
the availability group is successfully created.

8. Select Close to close the wizard.

Check the availability group


In Object Explorer, expand Always On High Availability, and then expand Availability
Groups. Right-click the availability group and select Show Dashboard.

Your availability group dashboard should look similar to the following screenshot, now
with another replica:

The dashboard shows the replicas, the failover mode of each replica, and the
synchronization state.
Check the availability group listener
1. In Object Explorer, expand Always On High Availability, expand Availability
Groups, and then expand Availability Group Listener.

2. Right-click the listener name and select Properties. All IP addresses should now
appear for the listener (one in each region).

Set the connection for multiple subnets


The replica in the remote datacenter is part of the availability group, but it's in a
different subnet. If this replica becomes the primary replica, application connection
time-outs might occur. This behavior is the same as an on-premises availability group in
a multiple-subnet deployment. To allow connections from client applications, either
update the client connection or configure name resolution caching on the cluster
network name resource.

Preferably, update the cluster configuration to set RegisterAllProvidersIP=1 and the


client connection strings to set MultiSubnetFailover=Yes . See Connecting with
MultiSubnetFailover.
If you can't modify the connection strings, you can configure name resolution caching.
See Timeout occurs when you connect to an Always On listener in a multi-subnet
environment .

Fail over to the remote region


To test listener connectivity to the remote region, you can fail the replica over to the
remote region. While the replica is asynchronous, failover is vulnerable to potential data
loss. To fail over without data loss, change the availability mode to synchronous and set
the failover mode to automatic. Use the following steps:

1. In Object Explorer, connect to the instance of SQL Server that hosts the primary
replica.

2. Under Always On Availability Groups, right-click your availability group and select
Properties.

3. On the General page, under Availability Replicas, set the secondary replica on the
disaster recovery (DR) site to use Synchronous Commit availability mode and
Automatic failover mode.

If you have a secondary replica in same site as your primary replica for high
availability, set this replica to Asynchronous Commit and Manual.

4. Select OK.

5. In Object Explorer, right-click the availability group and select Show Dashboard.

6. On the dashboard, verify that the replica on the DR site is synchronized.

7. In Object Explorer, right-click the availability group and select Failover. SQL Server
Management Studio opens a wizard to fail over SQL Server.

8. Select Next, and select the SQL Server instance on the DR site. Select Next again.

9. Connect to the SQL Server instance on the DR site, and then select Next.

10. On the Summary page, verify the settings and select Finish.

After you test connectivity, move the primary replica back to your primary datacenter
and set the availability mode back to its normal operating settings. The following table
shows the normal operating settings for the architecture described in this article:

Location Server Role Availability Failover


instance mode mode
Location Server Role Availability Failover
instance mode mode

Primary datacenter SQL-VM-1 Primary Synchronous Automatic

Primary datacenter SQL-VM-2 Secondary Synchronous Automatic

Secondary or remote SQL-VM-3 Secondary Asynchronous Manual


datacenter

For more information about planned and forced manual failover, see the following
articles:

Perform a planned manual failover of an availability group (SQL Server)


Perform a forced manual failover of an availability group (SQL Server)

Next steps
To learn more, see:

Windows Server failover cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Overview of Always On availability groups
HADR settings for SQL Server on Azure VMs
Use PowerShell or Az CLI to configure
an availability group for SQL Server on
Azure VM
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This article describes how to use PowerShell or the Azure CLI to deploy a Windows
failover cluster, add SQL Server VMs to the cluster, and create the internal load balancer
and listener for an Always On availability group within a single subnet.

Deployment of the availability group is still done manually through SQL Server
Management Studio (SSMS) or Transact-SQL (T-SQL).

While this article uses PowerShell and the Az CLI to configure the availability group
environment, it is also possible to do so from the Azure portal, using Azure Quickstart
templates, or Manually as well.

7 Note

It's now possible to lift and shift your availability group solution to SQL Server on
Azure VMs using Azure Migrate. See Migrate availability group to learn more.

Prerequisites
To configure an Always On availability group, you must have the following prerequisites:

An Azure subscription .
A resource group with a domain controller.
One or more domain-joined VMs in Azure running SQL Server 2016 (or later)
Enterprise edition in the same availability set or different availability zones that
have been registered with the SQL IaaS Agent extension.
The latest version of PowerShell or the Azure CLI.
Two available (not used by any entity) IP addresses. One is for the internal load
balancer. The other is for the availability group listener within the same subnet as
the availability group. If you're using an existing load balancer, you only need one
available IP address for the availability group listener.
Windows Server Core is not a supported operating system for the PowerShell
commands referenced in this article as there is a dependency on RSAT, which is not
included in Core installations of Windows.

Permissions
You need the following account permissions to configure the Always On availability
group by using the Azure CLI:

An existing domain user account that has Create Computer Object permission in
the domain. For example, a domain admin account typically has sufficient
permission (for example: account@domain.com). This account should also be part
of the local administrator group on each VM to create the cluster.
The domain user account that controls SQL Server.

Create a storage account


The cluster needs a storage account to act as the cloud witness. You can use any existing
storage account, or you can create a new storage account. If you want to use an existing
storage account, skip ahead to the next section.

The following code snippet creates the storage account:

Azure CLI

Azure CLI

# Create the storage account

# example: az storage account create -n 'cloudwitness' -g SQLVM-RG -l


'West US' `

# --sku Standard_LRS --kind StorageV2 --access-tier Hot --https-only


true

az storage account create -n <name> -g <resource group name> -l <region>


`

--sku Standard_LRS --kind StorageV2 --access-tier Hot --https-only


true

 Tip

You might see the error az sql: 'vm' is not in the 'az sql' command group if
you're using an outdated version of the Azure CLI. Download the latest version
of Azure CLI to get past this error.

Define cluster metadata


The Azure CLI az sql vm group command group manages the metadata of the Windows
Server Failover Cluster (WSFC) service that hosts the availability group. Cluster metadata
includes the Active Directory domain, cluster accounts, storage accounts to be used as
the cloud witness, and SQL Server version. Use az sql vm group create to define the
metadata for WSFC so that when the first SQL Server VM is added, the cluster is created
as defined.

The following code snippet defines the metadata for the cluster:

Azure CLI

Azure CLI

# Define the cluster metadata

# example: az sql vm group create -n Cluster -l 'West US' -g SQLVM-RG `

# --image-offer SQL2017-WS2016 --image-sku Enterprise --domain-fqdn


domain.com `

# --operator-acc vmadmin@domain.com --bootstrap-acc vmadmin@domain.com


--service-acc sqlservice@domain.com `

# --sa-key '4Z4/i1Dn8/bpbseyWX' `

# --storage-account 'https://cloudwitness.blob.core.windows.net/'

az sql vm group create -n <cluster name> -l <region ex:eastus> -g


<resource group name> `

--image-offer <SQL2016-WS2016 or SQL2017-WS2016> --image-sku


Enterprise --domain-fqdn <FQDN ex: domain.com> `

--operator-acc <domain account ex: testop@domain.com> --bootstrap-acc


<domain account ex:bootacc@domain.com> `

--service-acc <service account ex: testservice@domain.com> `

--sa-key '<PublicKey>' `

--storage-account '<ex:https://cloudwitness.blob.core.windows.net/>'

Add VMs to the cluster


Adding the first SQL Server VM to the cluster creates the cluster. The az sql vm add-to-
group command creates the cluster with the name previously given, installs the cluster
role on the SQL Server VMs, and adds them to the cluster. Subsequent uses of the az
sql vm add-to-group command add more SQL Server VMs to the newly created cluster.

The following code snippet creates the cluster and adds the first SQL Server VM to it:

Azure CLI

Azure CLI

# Add SQL Server VMs to cluster

# example: az sql vm add-to-group -n SQLVM1 -g SQLVM-RG --sqlvm-group


Cluster `

# -b Str0ngAzur3P@ssword! -p Str0ngAzur3P@ssword! -s
Str0ngAzur3P@ssword!

# example: az sql vm add-to-group -n SQLVM2 -g SQLVM-RG --sqlvm-group


Cluster `

# -b Str0ngAzur3P@ssword! -p Str0ngAzur3P@ssword! -s
Str0ngAzur3P@ssword!

az sql vm add-to-group -n <VM1 Name> -g <Resource Group Name> --sqlvm-


group <cluster name> `

-b <bootstrap account password> -p <operator account password> -s


<service account password>

az sql vm add-to-group -n <VM2 Name> -g <Resource Group Name> --sqlvm-


group <cluster name> `

-b <bootstrap account password> -p <operator account password> -s


<service account password>

Use this command to add any other SQL Server VMs to the cluster. Modify only the
-n parameter for the SQL Server VM name.

Configure quorum
Although the disk witness is the most resilient quorum option, it requires an Azure
shared disk which imposes some limitations to the availability group. As such, the cloud
witness is the recommended quorum solution for clusters hosting availability groups for
SQL Server on Azure VMs.

If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.
Validate cluster
For a failover cluster to be supported by Microsoft, it must pass cluster validation.
Connect to the VM using your preferred method, such as Remote Desktop Protocol
(RDP) and validate that your cluster passes validation before proceeding further. Failure
to do so leaves your cluster in an unsupported state.

You can validate the cluster using Failover Cluster Manager (FCM) or the following
PowerShell command:

PowerShell

Test-Cluster –Node ("<node1>","<node2>") –Include "Inventory", "Network",


"System Configuration"

Create availability group


Manually create the availability group as you normally would, by using SQL Server
Management Studio, PowerShell, or Transact-SQL.

) Important

Do not create a listener at this time because this is done through the Azure CLI in
the following sections.

Create internal load balancer

7 Note

Availability group deployments to multiple subnets don't require a load balancer.


In a single-subnet environment, customers who use SQL Server 2019 CU8 and later
on Windows 2016 and later can replace the traditional virtual network name (VNN)
listener and Azure Load Balancer with a distributed network name (DNN) listener.
If you want to use a DNN, skip any tutorial steps that configure Azure Load
Balancer for your availability group.

The Always On availability group listener requires an internal instance of Azure Load
Balancer. The internal load balancer provides a "floating" IP address for the availability
group listener that allows for faster failover and reconnection. If the SQL Server VMs in
an availability group are part of the same availability set, you can use a Basic load
balancer. Otherwise, you need to use a Standard load balancer.

7 Note

The internal load balancer should be in the same virtual network as the SQL Server
VM instances.

The following code snippet creates the internal load balancer:

Azure CLI

Azure CLI

# Create the internal load balancer

# example: az network lb create --name sqlILB -g SQLVM-RG --sku Standard


`

# --vnet-name SQLVMvNet --subnet default

az network lb create --name sqlILB -g <resource group name> --sku


Standard `

--vnet-name <VNet Name> --subnet <subnet name>

) Important

The public IP resource for each SQL Server VM should have a Standard SKU to be
compatible with the Standard load balancer. To determine the SKU of your VM's
public IP resource, go to Resource Group, select your Public IP Address resource
for the desired SQL Server VM, and locate the value under SKU in the Overview
pane.

Create listener
After you manually create the availability group, you can create the listener by using az
sql vm ag-listener.

The subnet resource ID is the value of /subnets/<subnetname> appended to the resource


ID of the virtual network resource. To identify the subnet resource ID:

1. Go to your resource group in the Azure portal .


2. Select the virtual network resource.
3. Select Properties in the Settings pane.
4. Identify the resource ID for the virtual network and append /subnets/<subnetname>
to the end of it to create the subnet resource ID. For example:

Your virtual network resource ID is:


/subscriptions/a1a1-
1a11a/resourceGroups/SQLVM-

RG/providers/Microsoft.Network/virtualNetworks/SQLVMvNet

Your subnet name is: default


Therefore, your subnet resource ID is:
/subscriptions/a1a1-
1a11a/resourceGroups/SQLVM-
RG/providers/Microsoft.Network/virtualNetworks/SQLVMvNet/subnets/default

The following code snippet creates the availability group listener:

Azure CLI

Azure CLI

# Create the availability group listener

# example: az sql vm group ag-listener create -n AGListener -g SQLVM-RG


`

# --ag-name SQLAG --group-name Cluster --ip-address 10.0.0.27 `

# --load-balancer sqlilb --probe-port 59999 `

# --subnet /subscriptions/a1a1-1a11a/resourceGroups/SQLVM-
RG/providers/Microsoft.Network/virtualNetworks/SQLVMvNet/subnets/default
`

# --sqlvms sqlvm1 sqlvm2

az sql vm group ag-listener create -n <listener name> -g <resource group


name> `

--ag-name <availability group name> --group-name <cluster name> --ip-


address <ag listener IP address> `

--load-balancer <lbname> --probe-port <Load Balancer probe port,


default 59999> `

--subnet <subnet resource id> `

--sqlvms <names of SQL VM's hosting AG replicas, ex: sqlvm1 sqlvm2>

Modify number of replicas


There's an added layer of complexity when you're deploying an availability group to SQL
Server VMs hosted in Azure. The resource provider and the virtual machine group now
manage the resources. As such, when you're adding or removing replicas in the
availability group, there's an additional step of updating the listener metadata with
information about the SQL Server VMs. When you're modifying the number of replicas
in the availability group, you must also use the az sql vm group ag-listener update
command to update the listener with the metadata of the SQL Server VMs.

Add a replica
To add a new replica to the availability group:

Azure CLI

1. Add the SQL Server VM to the cluster group:

Azure CLI

# Add the SQL Server VM to the cluster group

# example: az sql vm add-to-group -n SQLVM3 -g SQLVM-RG --sqlvm-


group Cluster `

# -b Str0ngAzur3P@ssword! -p Str0ngAzur3P@ssword! -s
Str0ngAzur3P@ssword!

az sql vm add-to-group -n <VM3 Name> -g <Resource Group Name> --


sqlvm-group <cluster name> `

-b <bootstrap account password> -p <operator account password> -s


<service account password>

2. Use SQL Server Management Studio to add the SQL Server instance as a
replica within the availability group.

3. Add the SQL Server VM metadata to the listener:

Azure CLI

# Update the listener metadata with the new VM

# example: az sql vm group ag-listener update -n AGListener `

# -g sqlvm-rg --group-name Cluster --sqlvms sqlvm1 sqlvm2 sqlvm3

az sql vm group ag-listener update -n <Listener> `

-g <RG name> --group-name <cluster name> --sqlvms <SQL VMs, along


with new SQL VM>

Remove a replica
To remove a replica from the availability group:
Azure CLI

1. Remove the replica from the availability group by using SQL Server
Management Studio.
2. Remove the SQL Server VM metadata from the listener:

Azure CLI

# Update the listener metadata by removing the VM from the SQLVMs


list

# example: az sql vm group ag-listener update -n AGListener `

# -g sqlvm-rg --group-name Cluster --sqlvms sqlvm1 sqlvm2

az sql vm group ag-listener update -n <Listener> `

-g <RG name> --group-name <cluster name> --sqlvms <SQL VMs that


remain>

3. Remove the SQL Server VM from the cluster:

Azure CLI

# Remove the SQL VM from the cluster

# example: az sql vm remove-from-group --name SQLVM3 --resource-


group SQLVM-RG

az sql vm remove-from-group --name <SQL VM name> --resource-group


<RG name>

Remove listener
If you later need to remove the availability group listener configured with the Azure CLI,
you must go through the SQL IaaS Agent extension. Because the listener is registered
through the SQL IaaS Agent extension, just deleting it via SQL Server Management
Studio is insufficient.

The best method is to delete it through the SQL IaaS Agent extension by using the
following code snippet in the Azure CLI. Doing so removes the availability group listener
metadata from the SQL IaaS Agent extension. It also physically deletes the listener from
the availability group.

Azure CLI

Azure CLI
# Remove the availability group listener

# example: az sql vm group ag-listener delete --group-name Cluster --


name AGListener --resource-group SQLVM-RG

az sql vm group ag-listener delete --group-name <cluster name> --name


<listener name > --resource-group <resource group name>

Remove cluster
Remove all of the nodes from the cluster to destroy it, and then remove the cluster
metadata from the SQL IaaS Agent extension. You can do so by using the Azure CLI or
PowerShell.

Azure CLI

First, remove all of the SQL Server VMs from the cluster:

Azure CLI

# Remove the VM from the cluster metadata

# example: az sql vm remove-from-group --name SQLVM2 --resource-group


SQLVM-RG

az sql vm remove-from-group --name <VM1 name> --resource-group


<resource group name>

az sql vm remove-from-group --name <VM2 name> --resource-group


<resource group name>

If these are the only VMs in the cluster, then the cluster will be destroyed. If there
are any other VMs in the cluster apart from the SQL Server VMs that were removed,
the other VMs will not be removed and the cluster will not be destroyed.

Next, remove the cluster metadata from the SQL IaaS Agent extension:

Azure CLI

# Remove the cluster from the SQL VM RP metadata

# example: az sql vm group delete --name Cluster --resource-group SQLVM-


RG

az sql vm group delete --name <cluster name> Cluster --resource-group


<resource group name>

Next steps
Once the availability group is deployed, consider optimizing the HADR settings for SQL
Server on Azure VMs.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Always On availability groups overview
Use Azure quickstart templates to
configure an availability group for SQL
Server on Azure VM
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This article describes how to use the Azure quickstart templates to partially automate
the deployment of an Always On availability group configuration for SQL Server virtual
machines (VMs) within a single subnet in Azure. Two Azure quickstart templates are
used in this process:

Template Description

sql-vm- Creates the Windows failover cluster and joins the SQL Server VMs to it.
ag-
setup

sql-vm- Creates the availability group listener and configures the internal load balancer. This
aglistener- template can be used only if the Windows failover cluster was created with the 101-
setup sql-vm-ag-setup template.

Other parts of the availability group configuration must be done manually, such as
creating the availability group and creating the internal load balancer. This article
provides the sequence of automated and manual steps.

While this article uses the Azure Quickstart templates to configure the availability group
environment, it is also possible to do so using the Azure portal, PowerShell or the Azure
CLI, or Manually as well.

7 Note
It's now possible to lift and shift your availability group solution to SQL Server on
Azure VMs using Azure Migrate. See Migrate availability group to learn more.

Prerequisites
To automate the setup of an Always On availability group by using quickstart templates,
you must have the following prerequisites:

An Azure subscription .
A resource group with a domain controller.
One or more domain-joined VMs in Azure running SQL Server 2016 (or later)
Enterprise edition that are in the same availability set or availability zone and that
have been registered with the SQL IaaS Agent extension.
An internal Azure Load Balancer and an available (not used by any entity) IP
address for the availability group listener within the same subnet as the SQL Server
VM.

Permissions
The following permissions are necessary to configure the Always On availability group
by using Azure quickstart templates:

An existing domain user account that has Create Computer Object permission in
the domain. For example, a domain admin account typically has sufficient
permission (for example: account@domain.com). This account should also be part
of the local administrator group on each VM to create the cluster.
The domain user account that controls SQL Server.

Create cluster
After your SQL Server VMs have been registered with the SQL IaaS Agent extension, you
can join your SQL Server VMs to SqlVirtualMachineGroups. This resource defines the
metadata of the Windows failover cluster. Metadata includes the version, edition, fully
qualified domain name, Active Directory accounts to manage both the cluster and SQL
Server, and the storage account as the cloud witness.

Adding SQL Server VMs to the SqlVirtualMachineGroups resource group bootstraps the
Windows Failover Cluster Service to create the cluster and then joins those SQL Server
VMs to that cluster. This step is automated with the 101-sql-vm-ag-setup quickstart
template. You can implement it by using the following steps:
1. Go to the sql-vm-ag-setup quickstart template. Then, select Deploy to Azure to
open the quickstart template in the Azure portal.

2. Fill out the required fields to configure the metadata for the Windows failover
cluster. You can leave the optional fields blank.

The following table shows the necessary values for the template:

Field Value

Subscription The subscription where your SQL Server VMs exist.

Resource The resource group where your SQL Server VMs reside.
group

Failover The name that you want for your new Windows failover cluster.
Cluster
Name

Existing Vm The SQL Server VMs that you want to participate in the availability group
List and be part of this new cluster. Separate these values with a comma and a
space (for example: SQLVM1, SQLVM2).

SQL Server The SQL Server version of your SQL Server VMs. Select it from the drop-
Version down list. Currently, only SQL Server 2016 and SQL Server 2017 images are
supported.

Existing The existing FQDN for the domain in which your SQL Server VMs reside.
Fully
Qualified
Domain
Name

Existing An existing domain user account that has Create Computer Object
Domain permission in the domain as the CNO is created during template
Account deployment. For example, a domain admin account typically has sufficient
permission (for example: account@domain.com). This account should also be
part of the local administrator group on each VM to create the cluster.

Domain The password for the previously mentioned domain user account.
Account
Password

Existing Sql The domain user account that controls the SQL Server service during
Service availability group deployment (for example: account@domain.com).
Account

Sql Service The password used by the domain user account that controls SQL Server.
Password
Field Value

Cloud A new Azure storage account that will be created and used for the cloud
Witness witness. You can modify this name.
Name

_artifacts This field is set by default and should not be modified.


Location

_artifacts This field is intentionally left blank.


Location
SaS Token

3. If you agree to the terms and conditions, select the I Agree to the terms and
conditions stated above check box. Then select Purchase to finish deployment of
the quickstart template.

4. To monitor your deployment, either select the deployment from the Notifications
bell icon in the top navigation banner or go to Resource Group in the Azure portal.
Select Deployments under Settings, and choose the Microsoft.Template
deployment.

7 Note

Credentials provided during template deployment are stored only for the length of
the deployment. After deployment finishes, those passwords are removed. You'll be
asked to provide them again if you add more SQL Server VMs to the cluster.

Configure quorum
Although the disk witness is the most resilient quorum option, it requires an Azure
shared disk which imposes some limitations to the availability group. As such, the cloud
witness is the recommended quorum solution for clusters hosting availability groups for
SQL Server on Azure VMs.

If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.

Validate cluster
For a failover cluster to be supported by Microsoft, it must pass cluster validation.
Connect to the VM using your preferred method, such as Remote Desktop Protocol
(RDP) and validate that your cluster passes validation before proceeding further. Failure
to do so leaves your cluster in an unsupported state.

You can validate the cluster using Failover Cluster Manager (FCM) or the following
PowerShell command:

PowerShell

Test-Cluster –Node ("<node1>","<node2>") –Include "Inventory", "Network",


"System Configuration"

Create availability group


Manually create the availability group as you normally would, by using SQL Server
Management Studio, PowerShell, or Transact-SQL.

) Important

Do not create a listener at this time, because the 101-sql-vm-aglistener-setup


quickstart template does that automatically in step 4.

Create load balancer

7 Note

Availability group deployments to multiple subnets don't require a load balancer.


In a single-subnet environment, customers who use SQL Server 2019 CU8 and later
on Windows 2016 and later can replace the traditional virtual network name (VNN)
listener and Azure Load Balancer with a distributed network name (DNN) listener.
If you want to use a DNN, skip any tutorial steps that configure Azure Load
Balancer for your availability group.

The Always On availability group listener requires an internal instance of Azure Load
Balancer. The internal load balancer provides a "floating" IP address for the availability
group listener that allows for faster failover and reconnection. If the SQL Server VMs in
an availability group are part of the same availability set, you can use a Basic load
balancer. Otherwise, you need to use a Standard load balancer.

) Important
The internal load balancer should be in the same virtual network as the SQL Server
VM instances.

You just need to create the internal load balancer. In step 4, the 101-sql-vm-aglistener-
setup quickstart template handles the rest of the configuration (such as the backend
pool, health probe, and load-balancing rules).

1. In the Azure portal, open the resource group that contains the SQL Server virtual
machines.

2. In the resource group, select Add.

3. Search for load balancer. In the search results, select Load Balancer, which is
published by Microsoft.

4. On the Load Balancer blade, select Create.

5. In the Create load balancer dialog box, configure the load balancer as follows:

Setting Value

Name Enter a text name that represents the load balancer. For example, enter
sqlLB.

Type Internal: Most implementations use an internal load balancer, which allows
applications within the same virtual network to connect to the availability
group.
External: Allows applications to connect to the availability group through a
public internet connection.

Virtual Select the virtual network that the SQL Server instances are in.
network

Subnet Select the subnet that the SQL Server instances are in.

IP address Static
assignment

Private IP Specify an available IP address from the subnet.


address

Subscription If you have multiple subscriptions, this field might appear. Select the
subscription that you want to associate with this resource. It's normally the
same subscription as all the resources for the availability group.

Resource Select the resource group that the SQL Server instances are in.
group

Location Select the Azure location that the SQL Server instances are in.
6. Select Create.

) Important

The public IP resource for each SQL Server VM should have a Standard SKU to be
compatible with the Standard load balancer. To determine the SKU of your VM's
public IP resource, go to Resource Group, select your Public IP Address resource
for the SQL Server VM, and locate the value under SKU in the Overview pane.

Create listener
Create the availability group listener and configure the internal load balancer
automatically by using the 101-sql-vm-aglistener-setup quickstart template. The
template provisions the
Microsoft.SqlVirtualMachine/SqlVirtualMachineGroups/AvailabilityGroupListener
resource. The 101-sql-vm-aglistener-setup quickstart template, via the SQL IaaS Agent
extension, does the following actions:

Creates a new frontend IP resource (based on the IP address value provided during
deployment) for the listener.
Configures the network settings for the cluster and the internal load balancer.
Configures the backend pool for the internal load balancer, the health probe, and
the load-balancing rules.
Creates the availability group listener with the given IP address and name.

7 Note

You can use 101-sql-vm-aglistener-setup only if the Windows failover cluster was
created with the 101-sql-vm-ag-setup template.

To configure the internal load balancer and create the availability group listener, do the
following:

1. Go to the sql-vm-aglistener-setup quickstart template and select Deploy to


Azure to start the quickstart template in the Azure portal.

2. Fill out the required fields to configure the internal load balancer, and create the
availability group listener. You can leave the optional fields blank.

The following table shows the necessary values for the template:
Field Value

Resource The resource group where your SQL Server VMs and availability group exist.
group

Existing The name of the cluster that your SQL Server VMs are joined to.
Failover
Cluster
Name

Existing The name of the availability group that your SQL Server VMs are a part of.
Sql
Availability
Group

Existing The names of the SQL Server VMs that are part of the previously mentioned
Vm List availability group. Separate the names with a comma and a space (for
example: SQLVM1, SQLVM2).

Listener The DNS name that you want to assign to the listener. By default, this
template specifies the name "aglistener," but you can change it. The name
should not exceed 15 characters.

Listener The port that you want the listener to use. Typically, this port should be the
Port default of 1433. This is the port number that the template specifies. But if your
default port has been changed, the listener port should use that value instead.

Listener IP The IP address that you want the listener to use. This address will be created
during template deployment, so provide one that isn't already in use.

Existing The name of the internal subnet of your SQL Server VMs (for example:
Subnet default). You can determine this value by going to Resource Group, selecting
your virtual network, selecting Subnets in the Settings pane, and copying the
value under Name.

Existing The name of the internal load balancer that you created in step 3.
Internal
Load
Balancer

Probe Port The probe port that you want the internal load balancer to use. The template
uses 59999 by default, but you can change this value.

3. If you agree to the terms and conditions, select the I Agree to the terms and
conditions stated above check box. Select Purchase to finish deployment of the
quickstart template.

4. To monitor your deployment, either select the deployment from the Notifications
bell icon in the top navigation banner or go to Resource Group in the Azure portal.
Select Deployments under Settings, and choose the Microsoft.Template
deployment.

7 Note

If your deployment fails halfway through, you'll need to manually remove the
newly created listener by using PowerShell before you redeploy the 101-sql-vm-
aglistener-setup quickstart template.

Remove listener
If you later need to remove the availability group listener that the template configured,
you must go through the SQL IaaS Agent extension. Because the listener is registered
through the SQL IaaS Agent extension, just deleting it via SQL Server Management
Studio is insufficient.

The best method is to delete it through the SQL IaaS Agent extension by using the
following code snippet in PowerShell. Doing so removes the availability group listener
metadata from the SQL IaaS Agent extension. It also physically deletes the listener from
the availability group.

PowerShell

# Remove the availability group listener

# example: Remove-AzResource -ResourceId '/subscriptions/a1a11a11-1a1a-aa11-


aa11-1aa1a11aa11a/resourceGroups/SQLAG-
RG/providers/Microsoft.SqlVirtualMachine/SqlVirtualMachineGroups/Cluster/ava
ilabilitygrouplisteners/aglistener' -Force

Remove-AzResource -ResourceId
'/subscriptions/<SubscriptionID>/resourceGroups/<resource-group-
name>/providers/Microsoft.SqlVirtualMachine/SqlVirtualMachineGroups/<cluster
-name>/availabilitygrouplisteners/<listener-name>' -Force

Common errors
This section discusses some known issues and their possible resolution.

Availability group listener for availability group '<AG-Name>' already exists


The
selected availability group used in the Azure quickstart template for the availability
group listener already contains a listener. Either it is physically within the availability
group, or its metadata remains within the SQL IaaS Agent extension. Remove the
listener by using PowerShell before redeploying the 101-sql-vm-aglistener-setup
quickstart template.

Connection only works from primary replica


This behavior is likely from a failed 101-
sql-vm-aglistener-setup template deployment that has left the configuration of the
internal load balancer in an inconsistent state. Verify that the backend pool lists the
availability set, and that rules exist for the health probe and for the load-balancing rules.
If anything is missing, the configuration of the internal load balancer is an inconsistent
state.

To resolve this behavior, remove the listener by using PowerShell, delete the internal
load balancer via the Azure portal, and start again at step 3.

BadRequest - Only SQL virtual machine list can be updated


This error might occur
when you're deploying the 101-sql-vm-aglistener-setup template if the listener was
deleted via SQL Server Management Studio (SSMS), but was not deleted from the SQL
IaaS Agent extension. Deleting the listener via SSMS does not remove the metadata of
the listener from the SQL IaaS Agent extension. The listener must be deleted from the
resource provider through PowerShell.

Domain account does not exist


This error can have two causes. Either the specified
domain account doesn't exist, or it's missing the User Principal Name (UPN) data. The
101-sql-vm-ag-setup template expects a domain account in the UPN form (that is,
user@domain.com), but some domain accounts might be missing it. This typically
happens when a local user has been migrated to be the first domain administrator
account when the server was promoted to a domain controller, or when a user was
created through PowerShell.

Verify that the account exists. If it does, you might be running into the second situation.
To resolve it, do the following:

1. On the domain controller, open the Active Directory Users and Computers
window from the Tools option in Server Manager.

2. Go to the account by selecting Users in the left pane.

3. Right-click the account, and select Properties.

4. Select the Account tab. If the User logon name box is blank, this is the cause of
your error.
5. Fill in the User logon name box to match the name of the user, and select the
proper domain from the drop-down list.

6. Select Apply to save your changes, and close the dialog box by selecting OK.

After you make these changes, try to deploy the Azure quickstart template once more.

Next steps
To learn more, see:

Overview of SQL Server VMs


FAQ for SQL Server VMs
Pricing guidance for SQL Server VMs
What's new in SQL Server on Azure VMs
Switching licensing models for a SQL Server VM
Configure an availability group across
Azure regions - SQL Server on Azure
VMs
Article • 04/20/2023

Applies to:
SQL Server on Azure VM

This tutorial explains how to configure an Always On availability group replica for SQL
Server on Azure virtual machines (VMs) in an Azure region that is remote to the primary
replica. You can use this configuration for the purpose of disaster recovery (DR).

You can also use the steps in this article to extend an existing on-premises availability
group to Azure.

This tutorial builds on the tutorial to manually deploy an availability group in a single
subnet in a single region. Mentions of the local region in this article refer to the virtual
machines and availability group already configured in the first region. The remote
region is the new infrastructure that's being added in this tutorial.

Overview
The following image shows a common deployment of an availability group on Azure
virtual machines:
In the deployment shown in the diagram, all virtual machines are in one Azure region.
The availability group replicas can have synchronous commit with automatic failover on
SQL-1 and SQL-2. To build this architecture, see the availability group template or
tutorial.

This architecture is vulnerable to downtime if the Azure region becomes inaccessible. To


overcome this vulnerability, add a replica in a different Azure region. The following
diagram shows how the new architecture looks:

The diagram shows a new virtual machine called SQL-3. SQL-3 is in a different Azure
region. It's added to the Windows Server failover cluster and can host an availability
group replica.

The Azure region for SQL-3 has a new Azure load balancer. In this architecture, the
replica in the remote region is normally configured with asynchronous commit
availability mode and manual failover mode.

7 Note

An Azure availability set is required when more than one virtual machine is in the
same region. If only one virtual machine is in the region, the availability set is not
required.

You can place a virtual machine in an availability set only at creation time. If the
virtual machine is already in an availability set, you can add a virtual machine for an
additional replica later.
When availability group replicas are on Azure virtual machines in different Azure
regions, you can connect the virtual networks by using virtual network peering or a site-
to-site VPN gateway.

) Important

This architecture incurs outbound data charges for data replicated between Azure
regions. See Bandwidth pricing .

Create the network and subnet


Before you create a virtual network and subnet in a new region, decide on the address
space, subnet network, cluster IP, and availability group listener IP addresses that you'll
use for the remote region.

The following table lists details for the local (current) region and what will be set up in
the new remote region.

Type Local Remote region

Address space 192.168.0.0/16 10.36.0.0/16

Subnet network 192.168.15.0/24 10.36.1.0/24

Cluster IP 192.168.15.200 10.36.1.200

Availability group listener IP 192.168.15.201 10.36.1.201

To create a virtual network and subnet in the new region in the Azure portal:

1. Go to your resource group in the Azure portal and select + Create.

2. Search for virtual network in the Marketplace search box, and then select the
virtual network tile from Microsoft.

3. On the Create virtual network page, select Create. Then enter the following
information on the Basics tab:
a. Under Project details, for Subscription, select the appropriate Azure
subscription. For Resource group, select the resource group that you created
previously, such as SQL-HA-RG.
b. Under Instance details, provide a name for your virtual network, such as
remote_HAVNET. Then choose a new remote region.
4. On the IP addresses tab, select the ellipsis (...) next to + Add a subnet. Select
Delete address space to remove the existing address space, if you need a different
address range.

5. Select Add an IP address space to open the pane to create the address space that
you need. This tutorial uses the address space of the remote region: 10.36.0.0/16.
Select Add.

6. Select + Add a subnet, and then:

a. Provide a value for the Subnet name, such as admin.

b. Provide a unique subnet address range within the virtual network address
space.

For example, if your address range is 10.36.0.0/16, enter these values for the
admin subnet: 10.36.1.0 for Starting address and /24 for Subnet size.

c. Select Add to add your new subnet.


Connect the virtual networks in the two Azure
regions
After you create the new virtual network and subnet, you're ready to connect the two
regions so they can communicate with each other. There are two methods to do this:

Connect virtual networks with virtual network peering by using the Azure portal
(recommended)

In some cases, you might have to use PowerShell to create the connection
between virtual networks. For example, if you use different Azure accounts, you
can't configure the connection in the portal. In this case, review Configure a
network-to-network connection by using the Azure portal.

Configure a site-to-site VPN gateway connection by using the Azure portal

This tutorial uses virtual network peering. To configure virtual network peering:

1. In the search box at the top of the Azure portal, type autoHAVNET, which is the
virtual network in your local region. When autoHAVNET appears in the search
results, select it.

2. Under Settings, select Peerings, and then select + Add.


3. Enter or select the following information, accept the defaults for the remaining
settings, and then select Add.

Setting Value

This virtual
network

Peering link Enter autoHAVNET-remote_HAVNET for the name of the peering from
name autoHAVNET to the remote virtual network.

Remote
virtual
network

Peering link Enter remote_HAVNET-autoHAVNET for the name of the peering from the
name remote virtual network to autoHAVNET.

Subscription Select your subscription for the remote virtual network.

Virtual Select remote_HAVNET for the name of the remote virtual network. The
network remote virtual network can be in the same region of autoHAVNET or in a
different region.

4. On the Peerings page, Peering status is Connected.


If you don't see a Connected status, select the Refresh button.

Create a domain controller


A domain controller in the new region is necessary to provide authentication if the
primary site is not available. To create the domain controller in the new region:

1. Return to the SQL-HA-RG resource group.


2. Select + Create.
3. Type Windows Server 2016 Datacenter, and then select the Windows Server 2016
Datacenter result.
4. In Windows Server 2016 Datacenter, verify that the deployment model is Resource
Manager, and then select Create.

The following table shows the settings for the two machines:

Setting Value

Name Remote domain controller: ad-remote-dc

VM disk type SSD

User name DomainAdmin

Password Contoso!0000

Subscription Your subscription

Resource group SQL-HA-RG

Location Your location

Size DS1_V2

Storage Use managed disks: Yes

Virtual network remote_HAVNET

Subnet admin
Setting Value

Public IP address Same name as the VM

Network security group Same name as the VM

Diagnostics Enabled

Diagnostics storage account Automatically created

Azure creates the virtual machines.

Configure the domain controller


In the following steps, configure the ad-remote-dc machine as a domain controller for
corp.contoso.com:

Set preferred DNS server address

The preferred DNS server address should not be updated directly within a VM, it should
be edited from the Azure portal, or PowerShell, or Azure CLI. The steps below are to
make the change inside of the Azure portal:

1. Sign-in to the Azure portal .

2. In the search box at the top of the portal, enter Network interface. Select Network
interfaces in the search results.

3. Select the network interface for the second domain controller that you want to
view or change settings for from the list.

4. In Settings, select DNS servers.

5. Since this domain controller is not in the same virtual network as the primary
domain controller select Custom and input the IP address of the primary domain
controller, such as 192.168.15.4 . The DNS server address you specify is assigned
only to this network interface and overrides any DNS setting for the virtual network
the network interface is assigned to.

6. Select Save.

7. Return to the virtual machine in the Azure portal and restart the VM. Once the
virtual machine has restarted, you can join the VM to the domain.
Join the domain
Next, join the corp.contoso.com domain. To do so, follow these steps:

1. Remotely connect to the virtual machine using the BUILTIN\DomainAdmin


account.
2. Open Server Manager, and select Local Server.
3. Select WORKGROUP.
4. In the Computer Name section, select Change.
5. Select the Domain checkbox and type corp.contoso.com in the text box. Select
OK.
6. In the Windows Security popup dialog, specify the credentials for the default
domain administrator account (CORP\DomainAdmin) and the password
(Contoso!0000).
7. When you see the "Welcome to the corp.contoso.com domain" message, select
OK.
8. Select Close, and then select Restart Now in the popup dialog.

Configure domain controller

Once your server has joined the domain, you can configure it as the second domain
controller. To do so, follow these steps:

1. If you're not already connected, open an RDP session to your secondary domain
controller, and open Server Manager Dashboard (which may be open by default).

2. Select the Add roles and features link on the dashboard.

3. Select Next until you get to the Server Roles section.


4. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any additional features that are required by these roles.

5. After the features finish installing, return to the Server Manager dashboard.

6. Select the new AD DS option on the left-hand pane.

7. Select the More link on the yellow warning bar.

8. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.

9. Under Deployment Configuration, select Add a domain controller to an existing


domain.

10. Click Select.

11. Connect by using the administrator account


(CORP.CONTOSO.COM\domainadmin) and password (Contoso!0000).

12. In Select a domain from the forest, choose your domain and then select OK.

13. In Domain Controller Options, use the default values and set a DSRM password.

7 Note

The DNS Options page might warn you that a delegation for this DNS server
can't be created. You can ignore this warning in non-production
environments.

14. Select Next until the dialog reaches the Prerequisites check. Then select Install.

After the server finishes the configuration changes, restart the server.

Create a SQL Server VM


After the domain controller restarts, the next step is to create a SQL Server virtual
machine in the new region.

Before you proceed, consider the following design decisions:

Storage: Azure managed disks

For the virtual machine storage, use Azure managed disks. We recommend
managed disks for SQL Server virtual machines. Managed disks handle storage
behind the scenes. In addition, when virtual machines with managed disks are in
the same availability set, Azure distributes the storage resources to provide
appropriate redundancy.

For more information, see Introduction to Azure managed disks. For specifics
about managed disks in an availability set, see Use managed disks for VMs in an
availability set.

Network: Private IP addresses in production

For the virtual machines, this tutorial uses public IP addresses. A public IP address
enables remote connection directly to the virtual machine over the internet and
makes configuration steps easier. In production environments, we recommend
only private IP addresses. Private IP addresses reduce the vulnerability footprint of
the SQL Server VM.

Network: Single NIC per server

Use a single network interface card (NIC) per server (cluster node) and a single
subnet. Azure networking has physical redundancy, which makes additional NICs
and subnets unnecessary on an Azure VM guest cluster. The cluster validation
report will warn you that the nodes are reachable on only a single network. You
can ignore this warning on Azure VM guest failover clusters.

Create and configure the SQL Server VM


To create the SQL Server VM, go back to the SQL-HA-RG resource group, and then
select Add. Search for the appropriate gallery item, select Virtual Machine, and then
select From Gallery. Use the information in the following table to help you create the
VMs:

Page Setting

Select the appropriate gallery item SQL Server 2016 SP1 Enterprise on Windows Server 2016
Page Setting

Virtual machine configuration: Name = sqlserver-2

Basics
User Name = DomainAdmin

Password = Contoso!0000

Subscription = Your subscription

Resource group = SQL-HA-RG

Location = Your remote region

Virtual machine configuration: Size Size = DS2_V2 (2 vCPUs, 7 GB)

The size must support SSD storage (premium disk


support).

Virtual machine configuration: Storage: Use managed disks

Settings
Virtual network = remote-HAVNET

Subnet = admin (10.36.1.0/24)

Public IP address = Automatically generated

Network security group = None

Monitoring Diagnostics = Enabled

Diagnostics storage account = Use an automatically


generated storage account

Virtual machine configuration: SQL SQL connectivity = Private (within Virtual Network)

Server settings
Port = 1433

SQL Authentication = Disabled

Storage configuration = General

Automated patching = Sunday at 2:00

Automated backup = Disabled

Azure Key Vault integration = Disabled


7 Note

The machine size suggested here is meant for testing availability groups in Azure
virtual machines. For the best performance on production workloads, see the
recommendations for SQL Server machine sizes and configuration in Checklist: Best
practices for SQL Server on Azure VMs.

After the VM is fully provisioned, you need to join it to the corp.contoso.com domain
and grant CORP\Install administrative rights to the machines.

Join the server to the domain


To join the VM to corp.contoso.com, use the following steps for the SQL Server VM:

1. Remotely connect to the virtual machine by using BUILTIN\DomainAdmin.


2. In Server Manager, select Local Server.
3. Select the WORKGROUP link.
4. In the Computer Name section, select Change.
5. Select the Domain check box, and enter corp.contoso.com in the text box. Then
select OK.
6. In the Windows Security pop-up dialog, specify the credentials for the default
domain administrator account (CORP\DomainAdmin) and the password
(Contoso!0000).
7. When you see the "Welcome to the corp.contoso.com domain" message, select
OK.
8. Select Close, and then select Restart Now in the pop-up dialog.

Add accounts
The next task is to add the installation account as an administrator on the SQL Server
VM, and then grant permission to that account and to local accounts within SQL Server.
You can then update the SQL Server service account.

Add the CORP\Install user as an administrator on each


cluster VM
After the SQL Server virtual machine restarts as a member of the domain, add
CORP\Install as a member of the local administrators group:
1. Wait until the VM is restarted, and then open the RDP file again from the primary
domain controller. Sign in to sqlserver-2 by using the CORP\DomainAdmin
account.

 Tip

In earlier steps, you were using the BUILTIN administrator account. Now that
the server is in the domain, make sure that you sign in with the domain
administrator account. In your RDP session, specify DOMAIN\username.

2. In Server Manager, select Tools, and then select Computer Management.

3. In the Computer Management window, expand Local Users and Groups, and then
select Groups.

4. Double-click the Administrators group.

5. In the Administrator Properties dialog, select the Add button.

6. Enter the user as CORP\Install, and then select OK.

7. Select OK to close the Administrator Properties dialog.

Create a sign-in on each SQL Server VM for the


installation account
Use the installation account (CORP\Install) to configure the availability group. This
account needs to be a member of the sysadmin fixed server role on each SQL Server
VM. The following steps create a sign-in for the installation account. Complete them on
both SQL Server VMs.

1. Connect to the server through RDP by using the <MachineName>\DomainAdmin


account.

2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.

3. In Object Explorer, select Security.

4. Right-click Logins. Select New Login.

5. In Login - New, select Search.

6. Select Locations.
7. Enter the domain administrator's network credentials. Use the installation account
(CORP\Install).

8. Set the sign-in to be a member of the sysadmin fixed server role.

9. Select OK.

Configure system account permissions


To create a system account and grant appropriate permissions, complete the following
steps on each SQL Server instance:

1. Use the following script to create an account for [NT AUTHORITY\SYSTEM] :

SQL

USE [master]

GO

CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE=


[master]

GO

2. Grant the following permissions to [NT AUTHORITY\SYSTEM] :

ALTER ANY AVAILABILITY GROUP


CONNECT SQL

VIEW SERVER STATE

The following script grants these permissions:

SQL

GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]

GO

GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]

GO

GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]

GO

Set the SQL Server service accounts


On each SQL Server VM, complete the following steps to set the SQL Server service
account. Use the accounts that you created when you configured the domain accounts.

1. Open SQL Server Configuration Manager.


2. Right-click the SQL Server service, and then select Properties.
3. Set the account and password.

For SQL Server availability groups, each SQL Server VM needs to run as a domain
account.

Create an Azure load balancer


A load balancer is required in the remote region to support the SQL Server availability
group. The load balancer holds the IP addresses for the availability group listeners and
the Windows Server failover cluster. This section summarizes how to create the load
balancer in the Azure portal.

The load balancer must:

Be in the same network and subnet as the new virtual machine.


Have a static IP address for the availability group listener.
Include a backend pool that consists of only the virtual machines in the same
region as the load balancer.
Use a TCP port probe that's specific to the IP address.
Have a load-balancing rule that's specific to the SQL Server instance in the same
region.
Be a standard load balancer if the virtual machines in the backend pool aren't part
of either a single availability set or a virtual machine scale set. For more
information, review What is Azure Load Balancer?.
Be a standard load balancer if the two virtual networks in two different regions are
peered over global virtual network peering. For more information, see Azure
Virtual Network frequently asked questions (FAQ).

The steps to create the load balancer are:

1. In the Azure portal, go to the resource group where your SQL Server instance is,
and then select + Add.

2. Search for Load Balancer. Choose the load balancer that Microsoft publishes.

3. Select Create.

4. Configure the following parameters for the load balancer:

Setting Value

Subscription Use the same subscription as the virtual machine.


Setting Value

Resource group Use the same resource group as the virtual machine.

Name Use a text name for the load balancer (for example, remoteLB).

Region Use the same region as the virtual machine.

SKU Select Standard.

Type Select Internal.

The Azure portal pane should look like this:

5. Select Next: Frontend IP Configuration.

6. Select Add a frontend IP configuration.


7. Set up the frontend IP address by using the following values:

Name: Use a name that identifies the frontend IP configuration.


Virtual network: Use the same network as the virtual machines.
Subnet: Use the same subnet as the virtual machines.
Assignment: Select Static.
IP address: Use an available address from the subnet. Use this address for
your availability group listener. This address is different from your cluster IP
address.
Availability zone: Optionally, choose an availability zone to deploy your IP
address to.
8. Select Add.

9. Select Review + Create to validate the configuration, and then select Create to
create the load balancer and the frontend IP address.

To configure the load balancer, you need to create a backend pool, create a probe, and
set the load-balancing rules.

Add a backend pool for the availability group listener


1. In the Azure portal, go to your availability group. You might need to refresh the
view to see the newly created load balancer.

2. Select the load balancer, select Backend pools, and then select +Add.

3. For Name, provide a name for the backend pool.

4. For Backend Pool Configuration, select NIC.

5. Select Add to associate the backend pool with the newly created SQL Server VM.
6. Under Virtual machine, choose the virtual machine that will host the availability
group replica.

7. Select Add to add the virtual machine to the backend pool.

8. Select Save.

Set the probe


1. In the Azure portal, select the load balancer, select Health probes, and then select
+Add.

2. Set the listener health probe as follows:

Setting Description Example

Name Text SQLAlwaysOnEndPointProbe

Protocol Choose TCP TCP

Port Any unused port 59999

Interval The amount of time between probe attempts, in 5


seconds

3. Select Add.

Set the load-balancing rules


1. In the Azure portal, select the load balancer, select Load balancing rules, and then
select +Add.

2. Set the listener load-balancing rules as follows:

Setting Description Example

Name Text SQLAlwaysOnEndPointListener

Frontend IP Choose an address Use the address that you created when
address you created the load balancer.

Backend pool Choose the backend pool Select the backend pool that contains
the virtual machines targeted for the
load balancer.

Protocol Choose TCP TCP


Setting Description Example

Port Use the port for the availability 1433


group listener

Backend Port This field is not used when you 1433


set a floating IP for direct
server return

Health Probe The name that you specified for SQLAlwaysOnEndPointProbe


the probe

Session Dropdown list None


Persistence

Idle Timeout Minutes to keep a TCP 4


connection open

Floating IP Enable this setting.


(direct server
return)

2 Warning

Direct server return is set during creation. You can't change it.

3. Select Save.

Add failover clustering to SQL Server VMs


To add failover clustering features, complete the following steps on both SQL Server
VMs:

1. Connect to the SQL Server virtual machine through RDP by using the CORP\Install
account. Open the Server Manager dashboard.

2. Select the Add roles and features link on the dashboard.


3. Select Next until you get to the Server Features section.

4. In Features, select Failover Clustering.

5. Add any required features.

6. Select Install.

7 Note

You can now automate this task, along with actually joining the SQL Server VMs to
the failover cluster, by using the Azure CLI and Azure quickstart templates.

Tune network thresholds for a failover cluster


When you're running Windows failover cluster nodes in Azure VMs with SQL Server
availability groups, change the cluster setting to a more relaxed monitoring state. This
change will make the cluster more stable and reliable. For details, see IaaS with SQL
Server: Tuning failover cluster network thresholds.

Configure the firewall on each SQL Server VM


The solution requires the following TCP ports to be open in the firewall:

SQL Server VM: Port 1433 for a default instance of SQL Server.
Azure load balancer probe: Any available port. Examples frequently use 59999.
Cluster core load balancer IP address health probe: Any available port. Examples
frequently use 58888.
Database mirroring endpoint: Any available port. Examples frequently use 5022.

The firewall ports need to be open on the new SQL Server VM. The method of opening
the ports depends on the firewall solution that you use. The following steps show how
to open the ports in Windows Firewall:

1. On the SQL Server Start screen, open Windows Firewall with Advanced Security.

2. On the left pane, select Inbound Rules. On the right pane, select New Rule.

3. For Rule Type, select Port.

4. For the port, specify TCP and enter the appropriate port numbers. The following
screenshot shows an example:

5. Select Next.

6. On the Action page, keep Allow the connection selected and select Next.

7. On the Profile page, accept the default settings and select Next.

8. On the Name page, specify a rule name (such as Azure LB Probe) in the Name
box, and then select Finish.
Add SQL Server to the Windows Server failover
cluster
The new SQL Server VM needs to be added to the Windows Server failover cluster that
exists in your local region.

To add the SQL Server VM to the cluster:

1. Use RDP to connect to a SQL Server VM in the existing cluster. Use a domain
account that's an administrator on both SQL Server VMs and the witness server.

2. On the Server Manager dashboard, select Tools, and then select Failover Cluster
Manager.

3. On the left pane, right-click Failover Cluster Manager, and then select Connect to
Cluster.

4. In the Select Cluster window, under Cluster name, choose <Cluster on this
server>. Then select OK.

5. In the browser tree, right-click the cluster and select Add Node.

6. In the Add Node Wizard, select Next.

7. On the Select Servers page, add the name of the new SQL Server instance. Enter
the server name in Enter server name, select Add, and then select Next.

8. On the Validation Warning page, select No. (In a production scenario, you should
perform the validation tests). Then, select Next.

9. On the Confirmation page, if you're using Storage Spaces, clear the Add all
eligible storage to the cluster checkbox.

2 Warning

If you don't clear Add all eligible storage to the cluster, Windows detaches
the virtual disks during the clustering process. As a result, they don't appear in
Disk Manager or Explorer until the storage is removed from the cluster and
reattached via PowerShell.

10. Select Next.

11. Select Finish.


Failover Cluster Manager shows that your cluster has a new node and lists it in the
Nodes container.

Add the IP address for the Windows Server failover


cluster

7 Note

On Windows Server 2019, the cluster creates a distributed server name instead of a
cluster network name. If you're using Windows Server 2019, skip to Add an IP
address for the availability group listener. You can create a cluster network name
by using PowerShell. For more information, review the blog post Failover Cluster:
Cluster Network Object .

Next, create the IP address resource and add it to the cluster for the new SQL Server VM:

1. In Failover Cluster Manager, select the name of the cluster. Right-click the cluster
name under Cluster Core Resources, and then select Properties:

2. In the Cluster Properties dialog, select Add under IP Addresses, and then add the
IP address of the cluster name from the remote network region. Select OK in the IP
Address dialog, and then select OK in the Cluster Properties dialog to save the
new IP address.

3. Add the IP address as a dependency for the cluster core name.

Open the Cluster Properties dialog once more, and select the Dependencies tab.
Configure an OR dependency for the two IP addresses.
Add an IP address for the availability group listener
The IP address for the listener in the remote region needs to be added to the cluster. To
add the IP address:

1. In Failover Cluster Manager, right-click the availability group role. Point to Add
Resource, point to More Resources, and then select IP Address.

2. To configure this IP address, right-click the resource under Other Resources, and
then select Properties.

3. For Name, enter a name for the new resource. For Network, select the network
from the remote datacenter. Select Static IP Address, and then in the Address box,
assign the static IP address from the new Azure load balancer.
4. Select Apply, and then select OK.

5. Add the IP address resource as a dependency for the listener client access point
(network name) cluster.

Right-click the listener client access point, and then select Properties. Browse to
the Dependencies tab and add the new IP address resource to the listener client
access point. The following screenshot shows a properly configured IP address
cluster resource:

) Important

The cluster resource group includes both IP addresses. Both IP addresses are
dependencies for the listener client access point. Use the OR operator in the
cluster dependency configuration.

6. Set the cluster parameters in PowerShell.

Run the PowerShell script with the cluster network name, IP address, and probe
port that you configured on the load balancer in the new region:

PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>" # The cluster name for


the network in the new region (Use Get-ClusterNetwork on Windows Server
2012 or later to find the name.)

$IPResourceName = "<IPResourceName>" # The cluster name for the new IP


address resource.

$ILBIP = "<n.n.n.n>" # The IP address of the internal load balancer in


the new region. This is the static IP address for the load balancer
that you configured in the Azure portal.

[int]$ProbePort = <nnnnn> # The probe port that you set on the internal
load balancer.

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ILBIP";"ProbePort"=$ProbePort;"SubnetMask"="255.255.255.2
55";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

Enable availability groups


Next, enable the Always On availability groups feature. Complete these steps on the new
SQL Server VM:

1. From the Start screen, open SQL Server Configuration Manager.

2. In the browser tree, select SQL Server Services. Right-click the SQL Server
(MSSQLSERVER) service, and then select Properties.

3. Select the AlwaysOn High Availability tab, and then select Enable AlwaysOn
Availability Groups.
4. Select Apply. Select OK in the pop-up dialog.

5. Restart the SQL Server service.

Add a replica to the availability group


After SQL Server has restarted on the newly created virtual machine, you can add it as a
replica to the availability group:

1. Open a remote desktop session to the primary SQL Server instance in the
availability group, and then open SQL Server Management Studio (SSMS).

2. In Object Explorer in SSMS, open Always On High Availability > Availability


Groups. Right-click your availability group name, and then select Add Replica.

3. Connect to the existing replica, and then select Next.

4. Select Add Replica and connect to the new SQL Server VM.

) Important
A replica in a remote Azure region should be set to asynchronous replication
with manual failover.

5. On the Select Initial Data Synchronization page, select Full and specify a shared
network location. For the location, use the backup share that you created. In the
example, it was \\<First SQL Server>\Backup\. Then select Next.

7 Note

Full synchronization takes a full backup of the database on the first instance
of SQL Server and restores it to the second instance. For large databases, we
don't recommend full synchronization because it might take a long time.

You can reduce this time by manually backing up the database and restoring
it with NO RECOVERY . If the database is already restored with NO RECOVERY on
the second SQL Server instance before you configure the availability group,
select Join only. If you want to take the backup after you configure the
availability group, select Skip initial data synchronization.

6. On the Validation page, select Next. This page should look similar to the following
image:
7 Note

A warning for the listener configuration says you haven't configured an


availability group listener. You can ignore this warning because the listener is
already set up. It was created after you created the Azure load balancer in the
local region.

7. On the Summary page, select Finish, and then wait while the wizard configures the
new availability group. On the Progress page, you can select More details to view
the detailed progress.

After the wizard finishes the configuration, inspect the Results page to verify that
the availability group is successfully created.

8. Select Close to close the wizard.

Check the availability group


In Object Explorer, expand Always On High Availability, and then expand Availability
Groups. Right-click the availability group and select Show Dashboard.
Your availability group dashboard should look similar to the following screenshot, now
with another replica:

The dashboard shows the replicas, the failover mode of each replica, and the
synchronization state.

Check the availability group listener


1. In Object Explorer, expand Always On High Availability, expand Availability
Groups, and then expand Availability Group Listener.

2. Right-click the listener name and select Properties. Both IP addresses should now
appear for the listener (one in each region).

Set the connection for multiple subnets


The replica in the remote datacenter is part of the availability group, but it's in a
different subnet. If this replica becomes the primary replica, application connection
time-outs might occur. This behavior is the same as an on-premises availability group in
a multiple-subnet deployment. To allow connections from client applications, either
update the client connection or configure name resolution caching on the cluster
network name resource.

Preferably, update the cluster configuration to set RegisterAllProvidersIP=1 and the


client connection strings to set MultiSubnetFailover=Yes . See Connecting with
MultiSubnetFailover.

If you can't modify the connection strings, you can configure name resolution caching.
See Timeout occurs when you connect to an Always On listener in a multi-subnet
environment .

Fail over to the remote region


To test listener connectivity to the remote region, you can fail the replica over to the
remote region. While the replica is asynchronous, failover is vulnerable to potential data
loss. To fail over without data loss, change the availability mode to synchronous and set
the failover mode to automatic. Use the following steps:

1. In Object Explorer, connect to the instance of SQL Server that hosts the primary
replica.

2. Under Always On Availability Groups, right-click your availability group and select
Properties.

3. On the General page, under Availability Replicas, set the secondary replica on the
disaster recovery (DR) site to use Synchronous Commit availability mode and
Automatic failover mode.

If you have a secondary replica in same site as your primary replica for high
availability, set this replica to Asynchronous Commit and Manual.

4. Select OK.

5. In Object Explorer, right-click the availability group and select Show Dashboard.

6. On the dashboard, verify that the replica on the DR site is synchronized.

7. In Object Explorer, right-click the availability group and select Failover. SQL Server
Management Studio opens a wizard to fail over SQL Server.

8. Select Next, and select the SQL Server instance on the DR site. Select Next again.

9. Connect to the SQL Server instance on the DR site, and then select Next.

10. On the Summary page, verify the settings and select Finish.

After you test connectivity, move the primary replica back to your primary datacenter
and set the availability mode back to its normal operating settings. The following table
shows the normal operating settings for the architecture described in this article:

Location Server Role Availability Failover


instance mode mode

Primary datacenter SQL-1 Primary Synchronous Automatic

Primary datacenter SQL-2 Secondary Synchronous Automatic

Secondary or remote SQL-3 Secondary Asynchronous Manual


datacenter
For more information about planned and forced manual failover, see the following
articles:

Perform a planned manual failover of an availability group (SQL Server)


Perform a forced manual failover of an availability group (SQL Server)

Next steps
To learn more, see:

Windows Server failover cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Overview of Always On availability groups
HADR settings for SQL Server on Azure VMs
Configure a workgroup availability
group
Article • 09/30/2022

Applies to:
SQL Server on Azure VM

This article explains the steps necessary to create an Active Directory domain-
independent cluster with an Always On availability group; this is also known as a
workgroup cluster. This article focuses on the steps that are relevant to preparing and
configuring the workgroup and availability group, and glosses over steps that are
covered in other articles, such as how to create the cluster, or deploy the availability
group.

Prerequisites
To configure a workgroup availability group, you need the following:

At least two Windows Server 2016 (or higher) virtual machines running SQL Server
2016 (or higher), deployed to the same availability set, or different availability
zones, using static IP addresses.
A local network with a minimum of 4 free IP addresses on the subnet.
An account on each machine in the administrator group that also has sysadmin
rights within SQL Server.
Open ports: TCP 1433, TCP 5022, TCP 59999.

For reference, the following parameters are used in this article, but can be modified as is
necessary:

Name Parameter

Node1 AGNode1 (10.0.0.4)

Node2 AGNode2 (10.0.0.5)

Cluster name AGWGAG (10.0.0.6)

Listener AGListener (10.0.0.7)

DNS suffix ag.wgcluster.example.com

Work group name AGWorkgroup


Set a DNS suffix
In this step, configure the DNS suffix for both servers. For example,
ag.wgcluster.example.com . This allows you to use the name of the object you want to

connect to as a fully qualified address within your network, such as


AGNode1.ag.wgcluster.example.com .

To configure the DNS suffix, follow these steps:

1. RDP in to your first node and open Server Manager.

2. Select Local Server and then select the name of your virtual machine under
Computer name.

3. Select Change... under To rename this computer....

4. Change the name of the workgroup name to be something meaningful, such as


AGWORKGROUP :

5. Select More... to open the DNS Suffix and NetBIOS Computer Name dialog box.

6. Type the name of your DNS suffix under Primary DNS suffix of this computer,
such as ag.wgcluster.example.com and then select OK:
7. Confirm that the Full computer name is now showing the DNS suffix, and then
select OK to save your changes:

8. Reboot the server when you are prompted to do so.

9. Repeat these steps on any other nodes to be used for the availability group.

Edit a host file


Since there is no active directory, there is no way to authenticate Windows connections.
As such, assign trust by editing the host file with a text editor.

To edit the host file, follow these steps:

1. RDP in to your virtual machine.


2. Use File Explorer to go to c:\windows\system32\drivers\etc .

3. Right-click the hosts file and open the file with Notepad (or any other text editor).

4. At the end of the file, add an entry for each node, the availability group, and the
listener in the form of IP Address, DNS Suffix #comment like:

10.0.0.4 AGNode1.ag.wgcluster.example.com #Availability group node

10.0.0.5 AGNode2.ag.wgcluster.example.com #Availability group node

10.0.0.6 AGWGAG.ag.wgcluster.example.com #Cluster IP

10.0.0.7 AGListener.ag.wgcluster.example.com #Listener IP

Set permissions
Since there is no Active Directory to manage permissions, you need to manually allow a
non-builtin local administrator account to create the cluster.

To do so, run the following PowerShell cmdlet in an administrative PowerShell session


on every node:

PowerShell

new-itemproperty -path
HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name
LocalAccountTokenFilterPolicy -Value 1

Create the failover cluster


In this step, you will create the failover cluster. If you're unfamiliar with these steps, you
can follow them from the failover cluster tutorial.

Notable differences between the tutorial and what should be done for a workgroup
cluster:

Uncheck Storage, and Storage Spaces Direct when running the cluster validation.
When adding the nodes to the cluster, add the fully qualified name, such as:
AGNode1.ag.wgcluster.example.com
AGNode2.ag.wgcluster.example.com

Uncheck Add all eligible storage to the cluster.

Once the cluster has been created, assign a static Cluster IP address. To do so, follow
these steps:

1. On one of the nodes, open Failover Cluster Manager, select the cluster, right-click
the Name: <ClusterNam> under Cluster Core Resources and then select
Properties.

2. Select the IP address under IP Addresses and select Edit.


3. Select Use Static, provide the IP address of the cluster, and then select OK:

4. Verify that your settings look correct, and then select OK to save them:
Create a cloud witness
In this step, configure a cloud share witness. If you're unfamiliar with the steps, see
Deploy a Cloud Witness for a Failover Cluster.

Enable the availability group feature


In this step, enable the availability group feature. If you're unfamiliar with the steps, see
the availability group tutorial.

Create keys and certificates


In this step, create certificates that a SQL login uses on the encrypted endpoint. Create a
folder on each node to hold the certificate backups, such as c:\certs .

To configure the first node, follow these steps:

1. Open SQL Server Management Studio and connect to your first node, such as
AGNode1 .

2. Open a New Query window and run the following Transact-SQL (T-SQL) statement
after updating to a complex and secure password:

SQL

USE master;

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'PassWOrd123!';

GO

--create a cert from the master key

USE master;

CREATE CERTIFICATE AGNode1Cert

WITH SUBJECT = 'AGNode1 Certificate';

GO

--Backup the cert and transfer it to AGNode2

BACKUP CERTIFICATE AGNode1Cert TO FILE = 'C:\certs\AGNode1Cert.crt';

GO

3. Next, create the HADR endpoint, and use the certificate for authentication by
running this Transact-SQL (T-SQL) statement:

SQL

--CREATE or ALTER the mirroring endpoint

CREATE ENDPOINT hadr_endpoint

STATE = STARTED

AS TCP (

LISTENER_PORT=5022

, LISTENER_IP = ALL

FOR DATABASE_MIRRORING (

AUTHENTICATION = CERTIFICATE AGNode1Cert

, ENCRYPTION = REQUIRED ALGORITHM AES

, ROLE = ALL

);

GO

4. Use File Explorer to go to the file location where your certificate is, such as
c:\certs .
5. Manually make a copy of the certificate, such as AGNode1Cert.crt , from the first
node, and transfer it to the same location on the second node.

To configure the second node, follow these steps:

1. Connect to the second node with SQL Server Management Studio, such as
AGNode2 .

2. In a New Query window, run the following Transact-SQL (T-SQL) statement after
updating to a complex and secure password:

SQL

USE master;

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'PassWOrd123!';

GO

--create a cert from the master key

USE master;

CREATE CERTIFICATE AGNode2Cert

WITH SUBJECT = 'AGNode2 Certificate';

GO

--Backup the cert and transfer it to AGNode1

BACKUP CERTIFICATE AGNode2Cert TO FILE = 'C:\certs\AGNode2Cert.crt';

GO

3. Next, create the HADR endpoint, and use the certificate for authentication by
running this Transact-SQL (T-SQL) statement:

SQL

--CREATE or ALTER the mirroring endpoint

CREATE ENDPOINT hadr_endpoint

STATE = STARTED

AS TCP (

LISTENER_PORT=5022

, LISTENER_IP = ALL

FOR DATABASE_MIRRORING (

AUTHENTICATION = CERTIFICATE AGNode2Cert

, ENCRYPTION = REQUIRED ALGORITHM AES

, ROLE = ALL

);

GO

4. Use File Explorer to go to the file location where your certificate is, such as
c:\certs .
5. Manually make a copy of the certificate, such as AGNode2Cert.crt , from the second
node, and transfer it to the same location on the first node.

If there are any other nodes in the cluster, repeat these steps there also, modifying the
respective certificate names.

Create logins
Certificate authentication is used to synchronize data across nodes. To allow this, create
a login for the other node, create a user for the login, create a certificate for the login to
use the backed-up certificate, and then grant connect on the mirroring endpoint.

To do so, first run the following Transact-SQL (T-SQL) query on the first node, such as
AGNode1 :

SQL

--create a login for the AGNode2

USE master;

CREATE LOGIN AGNode2_Login WITH PASSWORD = 'PassWord123!';

GO

--create a user from the login

CREATE USER AGNode2_User FOR LOGIN AGNode2_Login;

GO

--create a certificate that the login uses for authentication

CREATE CERTIFICATE AGNode2Cert

AUTHORIZATION AGNode2_User

FROM FILE = 'C:\certs\AGNode2Cert.crt'

GO

--grant connect for login

GRANT CONNECT ON ENDPOINT::hadr_endpoint TO [AGNode2_login];

GO

Next, run the following Transact-SQL (T-SQL) query on the second node, such as
AGNode2 :

SQL

--create a login for the AGNode1

USE master;

CREATE LOGIN AGNode1_Login WITH PASSWORD = 'PassWord123!';

GO

--create a user from the login

CREATE USER AGNode1_User FOR LOGIN AGNode1_Login;

GO

--create a certificate that the login uses for authentication

CREATE CERTIFICATE AGNode1Cert

AUTHORIZATION AGNode1_User

FROM FILE = 'C:\certs\AGNode1Cert.crt'

GO

--grant connect for login

GRANT CONNECT ON ENDPOINT::hadr_endpoint TO [AGNode1_login];

GO

If there are any other nodes in the cluster, repeat these steps there also, modifying the
respective certificate and user names.

Configure an availability group


In this step, configure your availability group, and add your databases to it. Do not
create a listener at this time. If you're not familiar with the steps, see the availability
group tutorial. Be sure to initiate a failover and failback to verify that everything is
working as it should be.

7 Note

If there is a failure during the synchronization process, you may need to grant NT
AUTHORITY\SYSTEM sysadmin rights to create cluster resources on the first node, such

as AGNode1 temporarily.

Configure a load balancer


In this final step, configure the load balancer using either the Azure portal or PowerShell.

However, there may be some limitations when using the Windows Cluster GUI, and as
such, you should use PowerShell to create a client access point or the network name for
your listener with the following example script:

PowerShell

Add-ClusterResource -Name "IPAddress1" -ResourceType "IP Address" -Group


"WGAG"

Get-ClusterResource -Name IPAddress1 | Set-ClusterParameter -Multiple


@{"Network" = "Cluster Network 1";"Address" = "10.0.0.4";"SubnetMask" =
"255.0.0.0";"EnableDHCP" = 0}

Add-ClusterResource -Name "IPAddress2" -ResourceType "IP Address" -Group


"WGAG"

Get-ClusterResource -Name IPAddress2 | Set-ClusterParameter -Multiple


@{"Network" = "Cluster Network 2";"Address" = "10.0.0.5";"SubnetMask" =
"255.0.0.0";"EnableDHCP" = 0}

Add-ClusterResource -Name "TestName" -Group "WGAG" -ResourceType "Network


Name"

Get-ClusterResource -Name "TestName" | Set-ClusterParameter -Multiple


@{"DnsName" = "TestName";"RegisterAllProvidersIP" = 1}

Set-ClusterResourceDependency -Resource TestName -Dependency "[IPAddress1]


or [IPAddress2]"

Start-ClusterResource -Name TestName -Verbose

Next steps
Once the availability group is deployed, consider optimizing the HADR settings for SQL
Server on Azure VMs.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Always On availability groups overview
Tutorial: Prerequisites for single-subnet
availability groups - SQL Server on
Azure VMs
Article • 04/18/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This tutorial shows how to complete the prerequisites for creating a SQL Server Always
On availability group on Azure virtual machines within a single subnet. When you've
completed the prerequisites, you'll have a domain controller, two SQL Server VMs, and a
witness server in a single resource group.

This article manually configures the availability group environment. It's also possible to
automate the steps by using the Azure portal, PowerShell or the Azure CLI, or Azure
quickstart templates.

Time estimate: It might take a couple of hours to complete the prerequisites. You'll
spend much of this time creating virtual machines.

The following diagram illustrates what you build in the tutorial.


7 Note

It's now possible to lift and shift your availability group solution to SQL Server on
Azure VMs by using Azure Migrate. To learn more, see Migrate an availability
group.

Review availability group documentation


This tutorial assumes that you have a basic understanding of SQL Server Always On
availability groups. If you're not familiar with this technology, see Overview of Always On
availability groups (SQL Server).

Create an Azure account


You need an Azure account. You can open a free Azure account or activate Visual
Studio subscriber benefits.

Create a resource group


To create the resource group in the Azure portal, follow these steps:

1. Sign in to the Azure portal .

2. Select + Create a resource.


3. Search for resource group in the Marketplace search box, and then choose the
Resource group tile from Microsoft. Select Create.

4. On the Create a resource group page, fill out the values to create the resource
group:
a. Choose the appropriate Azure subscription from the dropdown list.
b. Provide a name for your resource group, such as SQL-HA-RG.
c. Choose a region from the dropdown list, such as West US 2. Be sure to deploy
all subsequent resources to this location.
d. Select Review + create to review your resource parameters, and then select
Create to create your resource group.

Create the network and subnet


The next step is to create the network and subnet in the Azure resource group.

The solution in this tutorial uses one virtual network and one subnet. The virtual network
overview provides more information about networks in Azure.

To create the virtual network in the Azure portal, follow these steps:

1. Go to your resource group in the Azure portal and select + Create.

2. Search for virtual network in the Marketplace search box, and then choose the
Virtual network tile from Microsoft. Select Create.

3. On the Create virtual network page, enter the following information on the Basics
tab:
a. Under Project details, for Subscription, choose the appropriate Azure
subscription. For Resource group, select the resource group that you created
previously, such as SQL-HA-RG.
b. Under Instance details, provide a name for your virtual network, such as
autoHAVNET. In the dropdown list, choose the same region that you chose for
your resource group.

4. On the IP addresses tab, select the ellipsis (...) next to + Add a subnet. Select
Delete address space to remove the existing address space, if you need a different
address range.
5. Select Add an IP address space to open the pane to create the address space that
you need. This tutorial uses the address space of 192.168.0.0/16 (192.168.0.0 for
Starting address and /16 for Address space size). Select Add to create the address
space.

6. Select + Add a subnet, and then:

a. Provide a value for Subnet name, such as admin.

b. Provide a unique subnet address range within the virtual network address
space.

For example, if your address range is 192.168.0.0/16, enter 192.168.15.0 for


Starting address and /24 for Subnet size.
c. Select Add to add your new subnet.

7. Select Review + Create.

Azure returns you to the portal dashboard and notifies you when the new network is
created.

Create availability sets


Before you create virtual machines, you need to create availability sets. Availability sets
reduce the downtime for planned or unplanned maintenance events.

An Azure availability set is a logical group of resources that Azure places on these
physical domains:

Fault domain: Ensures that the members of the availability set have separate
power and network resources.
Update domain: Ensures that members of the availability set aren't brought down
for maintenance at the same time.

For more information, see Manage the availability of virtual machines.

You need two availability sets. One is for the domain controllers. The second is for the
SQL Server VMs.

To create an availability set:

1. Go to the resource group and select Add.


2. Filter the results by typing availability set. Select Availability Set in the results.
3. Select Create.

Configure two availability sets according to the parameters in the following table:

Field Domain controller availability set SQL Server availability set

Name adavailabilityset sqlavailabilityset

Resource group SQL-HA-RG SQL-HA-RG

Fault domains 3 3

Update domains 5 3

After you create the availability sets, return to the resource group in the Azure portal.

Create domain controllers


After you've created the network, subnet, and availability sets, you're ready to create
and configure domain controllers.

Create virtual machines for the domain controllers


Now, create two virtual machines. Name them ad-primary-dc and ad-secondary-dc.
Use the following steps for each VM:

1. Return to the SQL-HA-RG resource group.


2. Select Add.
3. Type Windows Server 2016 Datacenter, and then select Windows Server 2016
Datacenter.
4. In Windows Server 2016 Datacenter, verify that the deployment model is Resource
Manager, and then select Create.

7 Note

The ad-secondary-dc virtual machine is optional, to provide high availability for


Active Directory Domain Services.

The following table shows the settings for these two machines:

Field Value
Field Value

Name First domain controller: ad-primary-dc

Second domain controller: ad-secondary-dc

VM disk type SSD

User name DomainAdmin

Password Contoso!0000

Subscription Your subscription

Resource group SQL-HA-RG

Location Your location

Size DS1_V2

Storage Use managed disks - Yes

Virtual network autoHAVNET

Subnet admin

Public IP address Same name as the VM

Network security group Same name as the VM

Availability set adavailabilityset

Fault domains: 3

Update domains: 5

Diagnostics Enabled

Diagnostics storage account Automatically created

) Important

You can place a VM in an availability set only when you create it. You can't change
the availability set after a VM is created. See Manage the availability of virtual
machines.

Configure the primary domain controller


In the following steps, configure the ad-primary-dc machine as a domain controller for
corp.contoso.com:

1. In the portal, open the SQL-HA-RG resource group and select the ad-primary-dc
machine. On ad-primary-dc, select Connect to open a Remote Desktop Protocol
(RDP) file for remote desktop access.

2. Sign in with your configured administrator account (\DomainAdmin) and


password (Contoso!0000).

3. By default, the Server Manager dashboard should be displayed. Select the Add
roles and features link on the dashboard.

4. Select Next until you get to the Server Roles section.

5. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any features that these roles require.

7 Note

Windows warns you that there is no static IP address. If you're testing the
configuration, select Continue. For production scenarios, set the IP address to
static in the Azure portal, or use PowerShell to set the static IP address of the
domain controller machine.
6. Select Next until you reach the Confirmation section. Select the Restart the
destination server automatically if required checkbox.

7. Select Install.

8. After installation of the features finishes, return to the Server Manager dashboard.

9. Select the new AD DS option on the left pane.

10. Select the More link on the yellow warning bar.

11. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.

12. In the Active Directory Domain Services Configuration Wizard, use the following
values:
Page Setting

Deployment Configuration Add a new forest

Root domain name = corp.contoso.com

Domain Controller Options DSRM Password = Contoso!0000

Confirm Password = Contoso!0000

13. Select Next to go through the other pages in the wizard. On the Prerequisites
Check page, verify that the following message appears: "All prerequisite checks
passed successfully." You can review any applicable warning messages, but it's
possible to continue with the installation.

14. Select Install. The ad-primary-dc virtual machine automatically restarts.

Note the IP address of the primary domain controller


Use the primary domain controller for DNS. Note the primary domain controller's
private IP address.

One way to get the primary domain controller's IP address is through the Azure portal:

1. Open the resource group.

2. Select the primary domain controller.

3. On the primary domain controller, select Network interfaces.


Configure the virtual network DNS
After you create the first domain controller and enable DNS on the first server, configure
the virtual network to use this server for DNS:

1. In the Azure portal, select the virtual network.

2. Under Settings, select DNS Server.

3. Select Custom, and enter the private IP address of the primary domain controller.

4. Select Save.

Configure the secondary domain controller


After the primary domain controller restarts, you can use the following steps to
configure the secondary domain controller. This optional procedure is for high
availability.

Set preferred DNS server address

The preferred DNS server address should not be updated directly within a VM, it should
be edited from the Azure portal, or Powershell, or Azure CLI. The steps below are to
make the change inside of the Azure portal:

1. Sign-in to the Azure portal .

2. In the search box at the top of the portal, enter Network interface. Select Network
interfaces in the search results.

3. Select the network interface for the second domain controller that you want to
view or change settings for from the list.

4. In Settings, select DNS servers.

5. Select either:

Inherit from virtual network: Choose this option to inherit the DNS server
setting defined for the virtual network the network interface is assigned to.
This would automatically inherit the primary domain controller as the DNS
server.

Custom: You can configure your own DNS server to resolve names across
multiple virtual networks. Enter the IP address of the server you want to use
as a DNS server. The DNS server address you specify is assigned only to this
network interface and overrides any DNS setting for the virtual network the
network interface is assigned to. If you select custom, then input the IP
address of the primary domain controller, such as 192.168.15.4 .

6. Select Save. If using a Custom DNS Server, return to the virtual machine in the
Azure portal and restart the VM. Once the virtual machine has restarted, you can
join the VM to the domain.

Join the domain


Next, join the corp.contoso.com domain. To do so, follow these steps:

1. Remotely connect to the virtual machine using the BUILTIN\DomainAdmin


account. This account is the same one used when creating the domain controller
virtual machines.
2. Open Server Manager, and select Local Server.
3. Select WORKGROUP.
4. In the Computer Name section, select Change.
5. Select the Domain checkbox and type corp.contoso.com in the text box. Select
OK.
6. In the Windows Security popup dialog, specify the credentials for the default
domain administrator account (CORP\DomainAdmin) and the password
(Contoso!0000).
7. When you see the "Welcome to the corp.contoso.com domain" message, select
OK.
8. Select Close, and then select Restart Now in the popup dialog.

Configure domain controller

Once your server has joined the domain, you can configure it as the second domain
controller. To do so, follow these steps:

1. If you're not already connected, open an RDP session to your secondary domain
controller, and open Server Manager Dashboard (which may be open by default).

2. Select the Add roles and features link on the dashboard.


3. Select Next until you get to the Server Roles section.

4. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any additional features that are required by these roles.

5. After the features finish installing, return to the Server Manager dashboard.

6. Select the new AD DS option on the left-hand pane.

7. Select the More link on the yellow warning bar.

8. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.

9. Under Deployment Configuration, select Add a domain controller to an existing


domain.

10. Click Select.

11. Connect by using the administrator account


(CORP.CONTOSO.COM\domainadmin) and password (Contoso!0000).

12. In Select a domain from the forest, choose your domain and then select OK.

13. In Domain Controller Options, use the default values and set a DSRM password.

7 Note

The DNS Options page might warn you that a delegation for this DNS server
can't be created. You can ignore this warning in non-production
environments.
14. Select Next until the dialog reaches the Prerequisites check. Then select Install.

After the server finishes the configuration changes, restart the server.

Add the private IP address of the secondary domain


controller to the VPN DNS server
In the Azure portal, under Virtual network, change the DNS server to include the IP
address of the secondary domain controller. This setting allows the DNS service
redundancy.

Configure the domain accounts


Next, configure two accounts in total in Active Directory, one installation account and
then a service account for both SQL Server VMs. For example, use the values in the
following table for the accounts:

Account VM Full domain Description


name

Install Both Corp\Install Log into either VM with this account to configure
the cluster and availability group.

SQLSvc Both (sqlserver-0 Corp\SQLSvc Use this account for the SQL Server service and
and sqlserver-1) SQL Agent Service account on the both SQL Server
VMs.

Use the following steps to create each account:

1. Sign in to the ad-primary-dc machine.

2. In Server Manager, select Tools, and then select Active Directory Administrative
Center.

3. Select corp (local) from the left pane.

4. On the Tasks pane, select New, and then select User.


 Tip

Set a complex password for each account. For non-production environments,


set the user account to never expire.

5. Select OK to create the user.

Grant the required permissions to the installation account


1. In Active Directory Administrative Center, select corp (local) on the left pane. On
the Tasks pane, select Properties.

2. Select Extensions, and then select the Advanced button on the Security tab.

3. In the Advanced Security Settings for corp dialog, select Add.


4. Choose Select a principal, search for CORP\Install, and then select OK.

5. Select the Read all properties checkbox.

6. Select the Create Computer objects checkbox.

7. Select OK, and then select OK again. Close the corp properties window.

Now that you've finished configuring Active Directory and the user objects, you can
create additional VMs that you'll join to the domain.

Create SQL Server VMs


The solution in this tutorial requires you to create three virtual machines: two with SQL
Server instances and one that functions as a witness.
Windows Server 2016 can use a cloud witness. But for consistency with previous
operating systems, this article uses a virtual machine for a witness.

Before you proceed, consider the following design decisions:

Storage: Azure managed disks

For the virtual machine storage, use Azure managed disks. We recommend
managed disks for SQL Server virtual machines. Managed disks handle storage
behind the scenes. In addition, when virtual machines with managed disks are in
the same availability set, Azure distributes the storage resources to provide
appropriate redundancy.

For more information, see Introduction to Azure managed disks. For specifics
about managed disks in an availability set, see Availability options for Azure virtual
machines.

Network: Private IP addresses in production

For the virtual machines, this tutorial uses public IP addresses. A public IP address
enables remote connection directly to a virtual machine over the internet and
makes configuration steps easier. In production environments, we recommend
only private IP addresses to reduce the vulnerability footprint of the SQL Server
instance's VM resource.

Network: Number of NICs per server

Use a single network interface card (NIC) per server (cluster node) and a single
subnet. Azure networking has physical redundancy, which makes additional NICs
and subnets unnecessary on an Azure VM guest cluster.

The cluster validation report will warn you that the nodes are reachable only on a
single network. You can ignore this warning on Azure VM guest failover clusters.

Create and configure the VMs


1. Go back to the SQL-HA-RG resource group, and then select Add.

2. Search for the appropriate gallery item, select Virtual Machine, and then select
From Gallery.

3. Use the information in the following table to finish creating the three VMs:

Page VM1 VM2 VM3


Page VM1 VM2 VM3

Select the Windows Server 2016 SQL Server 2016 SP1 SQL Server 2016 SP1
appropriate Datacenter Enterprise on Enterprise on
gallery item Windows Server 2016 Windows Server 2016

Virtual Name = cluster-fsw


Name = sqlserver-0
Name = sqlserver-1

machine
configuration: User Name = User Name = User Name =
Basics DomainAdmin
DomainAdmin
DomainAdmin

Password = Password = Password =


Contoso!0000
Contoso!0000
Contoso!0000

Subscription = Your Subscription = Your Subscription = Your


subscription
subscription
subscription

Resource group = Resource group = Resource group =


SQL-HA-RG
SQL-HA-RG
SQL-HA-RG

Location = Your Azure Location = Your Azure Location = Your Azure


location location location

Virtual SIZE = DS1_V2 (1 SIZE = DS2_V2 (2 SIZE = DS2_V2 (2


machine vCPU, 3.5 GB) vCPUs, 7 GB)
vCPUs, 7 GB)
configuration:
Size The size must support
SSD storage (premium
disk support).
Page VM1 VM2 VM3

Virtual Storage = Use Storage = Use Storage = Use


machine managed disks
managed disks
managed disks

configuration:
Settings Virtual network = Virtual network = Virtual network =
autoHAVNET
autoHAVNET
autoHAVNET

Subnet = admin Subnet = admin Subnet = admin


(192.168.15.0/24)
(192.168.15.0/24)
(192.168.15.0/24)

Public IP address = Public IP address = Public IP address =


Automatically Automatically Automatically
generated
generated
generated

Network security Network security Network security


group = None
group = None
group = None

Monitoring Monitoring Monitoring


Diagnostics = Enabled
Diagnostics = Enabled
Diagnostics = Enabled

Diagnostics storage Diagnostics storage Diagnostics storage


account = Use an account = Use an account = Use an
automatically automatically automatically
generated storage generated storage generated storage
account
account
account

Availability set = Availability set = Availability set =


sqlAvailabilitySet
sqlAvailabilitySet
sqlAvailabilitySet

Page VM1 VM2 VM3

Virtual Not applicable SQL connectivity = SQL connectivity =


machine Private (within virtual Private (within virtual
configuration: network)
network)

SQL Server
settings Port = 1433
Port = 1433

SQL Authentication = SQL Authentication =


Disabled
Disabled

Storage configuration Storage configuration


= General
= General

Automated patching = Automated patching =


Sunday at 2:00
Sunday at 2:00

Automated backup = Automated backup =


Disabled
Disabled

Azure Key Vault Azure Key Vault


integration = Disabled integration = Disabled

7 Note

The machine sizes suggested here are meant for testing availability groups in Azure
virtual machines. For the best performance on production workloads, see the
recommendations for SQL Server machine sizes and configuration in Performance
best practices for SQL Server in Azure virtual machines.

After the three VMs are fully provisioned, you need to join them to the
corp.contoso.com domain and grant CORP\Install administrative rights to the
machines.

Join the servers to the domain


Complete the following steps for both the SQL Server VMs and the file share witness
server:

1. Remotely connect to the virtual machine with BUILTIN\DomainAdmin.


2. In Server Manager, select Local Server.
3. Select the WORKGROUP link.
4. In the Computer Name section, select Change.
5. Select the Domain checkbox, and enter corp.contoso.com in the text box. Select
OK.
6. In the Windows Security popup dialog, specify the credentials for the default
domain administrator account (CORP\DomainAdmin) and the password
(Contoso!0000).
7. When you see the "Welcome to the corp.contoso.com domain" message, select
OK.
8. Select Close, and then select Restart Now in the popup dialog.

Add accounts
Add the installation account as an administrator on each VM, grant permission to the
installation account and local accounts within SQL Server, and update the SQL Server
service account.

Add the CORP\Install user as an administrator on each


cluster VM
After each virtual machine restarts as a member of the domain, add CORP\Install as a
member of the local administrators group:

1. Wait until the VM is restarted, and then open the RDP file again from the primary
domain controller. Sign in to sqlserver-0 by using the CORP\DomainAdmin
account.

 Tip

Be sure to sign in with the domain administrator account. In the previous


steps, you were using the BUILTIN administrator account. Now that the server
is in the domain, use the domain account. In your RDP session, specify
DOMAIN\username.

2. In Server Manager, select Tools, and then select Computer Management.

3. In the Computer Management window, expand Local Users and Groups, and then
select Groups.

4. Double-click the Administrators group.

5. In the Administrators Properties dialog, select the Add button.

6. Enter the user CORP\Install, and then select OK.


7. Select OK to close the Administrator Properties dialog.

8. Repeat the previous steps on sqlserver-1 and cluster-fsw.

Create a sign-in on each SQL Server VM for the


installation account
Use the installation account (CORP\install) to configure the availability group. This
account needs to be a member of the sysadmin fixed server role on each SQL Server
VM.

The following steps create a sign-in for the installation account. Complete them on both
SQL Server VMs.

1. Connect to the server through RDP by using the <MachineName>\DomainAdmin


account.

2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.

3. In Object Explorer, select Security.

4. Right-click Logins. Select New Login.

5. In Login - New, select Search.

6. Select Locations.

7. Enter the network credentials for the domain administrator. Use the installation
account (CORP\install).

8. Set the sign-in to be a member of the sysadmin fixed server role.

9. Select OK.

Configure system account permissions


To create an account for the system and grant appropriate permissions, complete the
following steps on each SQL Server instance:

1. Create an account for [NT AUTHORITY\SYSTEM] by using the following script:

SQL

USE [master]

GO

CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE=


[master]

GO

2. Grant the following permissions to [NT AUTHORITY\SYSTEM] :

ALTER ANY AVAILABILITY GROUP


CONNECT SQL

VIEW SERVER STATE

The following script grants these permissions:

SQL

GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]

GO

GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]

GO

GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]

GO

Set the SQL Server service accounts


On each SQL Server VM, complete the following steps to set the SQL Server service
account. Use the accounts that you created when you configured the domain accounts.

1. Open SQL Server Configuration Manager.


2. Right-click the SQL Server service, and then select Properties.
3. Set the account and password.

For SQL Server availability groups, each SQL Server VM needs to run as a domain
account.

Add failover clustering


To add failover clustering features, complete the following steps on both SQL Server
VMs:

1. Connect to the SQL Server virtual machine through RDP by using the CORP\install
account. Open the Server Manager dashboard.

2. Select the Add roles and features link on the dashboard.


3. Select Next until you get to the Server Features section.

4. In Features, select Failover Clustering.

5. Add any required features.

6. Select Install.

7 Note

You can now automate this task, along with actually joining the SQL Server VMs to
the failover cluster, by using the Azure CLI and Azure quickstart templates.

Tune network thresholds for a failover cluster


When you're running Windows failover cluster nodes in Azure VMs with SQL Server
availability groups, change the cluster setting to a more relaxed monitoring state. This
change will make the cluster more stable and reliable. For details, see IaaS with SQL
Server: Tuning failover cluster network thresholds.

Configure the firewall on each SQL Server VM


The solution requires the following TCP ports to be open in the firewall:

SQL Server VM: Port 1433 for a default instance of SQL Server.
Azure load balancer probe: Any available port. Examples frequently use 59999.
Load balancer IP address health probe for cluster core: Any available port.
Examples frequently use 58888.
Database mirroring endpoint: Any available port. Examples frequently use 5022.

The firewall ports need to be open on both SQL Server VMs. The method of opening the
ports depends on the firewall solution that you use. The following steps show how to
open the ports in Windows Firewall:

1. On the first SQL Server Start screen, open Windows Firewall with Advanced
Security.

2. On the left pane, select Inbound Rules. On the right pane, select New Rule.

3. For Rule Type, select Port.

4. For the port, specify TCP and enter the appropriate port numbers. The following
screenshot shows an example:

5. Select Next.

6. On the Action page, keep Allow the connection selected, and then select Next.

7. On the Profile page, accept the default settings, and then select Next.

8. On the Name page, specify a rule name (such as Azure LB Probe) in the Name
box, and then select Finish.
Next steps
Now that you've configured the prerequisites, get started with configuring your
availability group.

To learn more, see:

Windows Server failover cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Overview of Always On availability groups
HADR settings for SQL Server on Azure VMs
Tutorial: Manually configure an
availability group - SQL Server on Azure
VMs
Article • 04/18/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This tutorial shows how to create an Always On availability group for SQL Server on
Azure VMs within a single subnet. The complete tutorial creates an availability group
with a database replica on two SQL Server instances.

This article manually configures the availability group environment. It's also possible to
automate the steps by using the Azure portal, PowerShell or the Azure CLI, or Azure
Quickstart Templates.

Time estimate: This tutorial takes about 30 minutes to complete after you meet the
prerequisites.

Prerequisites
The tutorial assumes that you have a basic understanding of SQL Server Always On
availability groups. If you need more information, see Overview of Always On availability
groups (SQL Server).

Before you begin the procedures in this tutorial, you need to complete prerequisites for
creating Always On availability groups in Azure virtual machines. If you completed these
prerequisites already, you can jump to Create the cluster.

The following table summarizes the prerequisites that you need before you can
complete this tutorial:
Requirement Description


Two SQL - In an Azure availability set

Server instances - In a single domain

- With failover clustering installed


Windows File share for a cluster witness
Server


SQL Server Domain account
service account


SQL Server Domain account
Agent service
account


Firewall ports - SQL Server: 1433 for a default instance

open - Database mirroring endpoint: 5022 or any available port

- Load balancer IP address health probe for an availability group: 59999 or any
available port

- Load balancer IP address health probe for cluster core: 58888 or any
available port


Failover Required for both SQL Server instances
clustering


Installation - Local administrator on each SQL Server instance

domain account - Member of the sysadmin fixed server role for each SQL Server instance


Network If the environment is using Network security groups, ensure that the current
Security Groups configuration allows Network traffic through ports described in Configure the
(NSGs) firewall.

Create the cluster


The first task is to create a Windows Server failover cluster with both SQL Server VMs
and a witness server:

1. Use Remote Desktop Protocol (RDP) to connect to the first SQL Server VM. Use a
domain account that's an administrator on both SQL Server VMs and the witness
server.

 Tip

In the prerequisites, you created an account called CORP\Install. Use this


account.
2. On the Server Manager dashboard, select Tools, and then select Failover Cluster
Manager.

3. On the left pane, right-click Failover Cluster Manager, and then select Create
Cluster.

4. In the Create Cluster Wizard, create a one-node cluster by stepping through the
pages with the settings in the following table:

Page Setting

Before You Use defaults.


Begin

Select Servers Enter the first SQL Server VM name in Enter server name, and then
select Add.

Validation Select No. I do not require support from Microsoft for this cluster, and
Warning therefore do not want to run the validation tests. When I select Next,
continue Creating the cluster.

Access Point for In Cluster Name, enter a cluster name (for example, SQLAGCluster1).
Administering
the Cluster

Confirmation Use defaults unless you're using Storage Spaces.

Set the Windows Server failover cluster's IP address

7 Note
On Windows Server 2019, the cluster creates a Distributed Server Name value
instead of the Cluster Network Name value. If you're using Windows Server 2019,
skip any steps that refer to the cluster core name in this tutorial. You can create a
cluster network name by using PowerShell. For more information, review the blog
post Failover Cluster: Cluster Network Object .

1. In Failover Cluster Manager, scroll down to Cluster Core Resources and expand
the cluster details. Both the Name and IP Address resources should be in the
Failed state.

The IP address resource can't be brought online because the cluster is assigned the
same IP address as the machine itself. It's a duplicate address.

2. Right-click the failed IP Address resource, and then select Properties.


3. Select Static IP Address. Specify an available address from the same subnet as
your virtual machines.

4. In the Cluster Core Resources section, right-click the cluster name and select Bring
Online. Wait until both resources are online.

When the cluster name resource comes online, it updates the domain controller
server with a new Active Directory computer account. Use this Active Directory
account to run the availability group's clustered service later.

Add the other SQL Server instance to the cluster


1. In the browser tree, right-click the cluster and select Add Node.

2. In the Add Node Wizard, select Next.

3. On the Select Servers page, add the second SQL Server VM. Enter the VM name in
Enter server name, and then select Add > Next.

4. On the Validation Warning page, select No. (In a production scenario, you should
perform the validation tests.) Then, select Next.

5. On the Confirmation page, if you're using Storage Spaces, clear the Add all
eligible storage to the cluster checkbox.
2 Warning

If you don't clear Add all eligible storage to the cluster, Windows detaches
the virtual disks during the clustering process. As a result, they don't appear in
Disk Manager or Object Explorer until the storage is removed from the
cluster and reattached via PowerShell.

6. Select Next.

7. Select Finish.

Failover Cluster Manager shows that your cluster has a new node and lists it in the
Nodes container.

8. Sign out of the remote desktop session.

Add a file share for a cluster quorum


In this example, the Windows cluster uses a file share to create a cluster quorum. This
tutorial uses a NodeAndFileShareMajority quorum. For more information, see Configure
and manage quorum.

1. Connect to the file share witness server VM by using a remote desktop session.

2. In Server Manager, select Tools. Open Computer Management.


3. Select Shared Folders.

4. Right-click Shares, and then select New Share.

Use the Create a Shared Folder Wizard to create a share.

5. On the Folder Path page, select Browse. Locate or create a path for the shared
folder, and then select Next.

6. On the Name, Description, and Settings page, verify the share name and path.
Select Next.

7. On the Shared Folder Permissions page, set Customize permissions. Select


Custom.

8. In the Customize Permissions dialog, select Add.

9. Make sure that the account that's used to create the cluster has full control.
10. Select OK.

11. On the Shared Folder Permissions page, select Finish. Then select Finish again.

12. Sign out of the server.

Configure the cluster quorum

7 Note

Depending on the configuration of your availability group, it might be necessary to


change the quorum vote of a node that's participating in the Windows Server
failover cluster. For more information, see Configure cluster quorum for SQL
Server on Azure VMs.

1. Connect to the first cluster node by using a remote desktop session.

2. In Failover Cluster Manager, right-click the cluster, point to More Actions, and
then select Configure Cluster Quorum Settings.
3. In the Configure Cluster Quorum Wizard, select Next.

4. On the Select Quorum Configuration Option page, choose Select the quorum
witness, and then select Next.

5. On the Select Quorum Witness page, select Configure a file share witness.

 Tip

Windows Server 2016 supports a cloud witness. If you choose this type of
witness, you don't need a file share witness. For more information, see Deploy
a cloud witness for a failover cluster. This tutorial uses a file share witness,
which previous operating systems support.

6. In Configure File Share Witness, enter the path for the share that you created.
Then select Next.

7. On the Confirmation page, verify the settings. Then select Next.

8. Select Finish.

The cluster core resources are configured with a file share witness.

Enable availability groups


Next, enable Always On availability groups. Complete these steps on both SQL Server
VMs.
1. From the Start screen, open SQL Server Configuration Manager.

2. In the browser tree, select SQL Server Services. Then right-click the SQL Server
(MSSQLSERVER) service and select Properties.

3. Select the Always On High Availability tab, and then select Enable Always On
availability groups.

4. Select Apply. Select OK in the pop-up dialog.

5. Restart the SQL Server service.

Create a database on the first SQL Server


instance
1. Open the RDP file to the first SQL Server VM with a domain account that's a
member of sysadmin fixed server role.
2. Open SQL Server Management Studio (SSMS) and connect to the first SQL Server
instance.
3. In Object Explorer, right-click Databases and select New Database.
4. In Database name, enter MyDB1, and then select OK.
Create a backup share
1. On the first SQL Server VM in Server Manager, select Tools. Open Computer
Management.

2. Select Shared Folders.

3. Right-click Shares, and then select New Share.

Use the Create a Shared Folder Wizard to create a share.

4. On the Folder Path page, select Browse. Locate or create a path for the database
backup's shared folder, and then select Next.

5. On the Name, Description, and Settings page, verify the share name and path.
Then select Next.

6. On the Shared Folder Permissions page, set Customize permissions. Then select
Custom.

7. In the Customize Permissions dialog, select Add.

8. Make sure that the accounts for the SQL Server and SQL Server Agent service on
both servers have full control.
9. Select OK.

10. On the Shared Folder Permissions page, select Finish. Select Finish again.

Take a full backup of the database


You need to back up the new database to initialize the log chain. If you don't take a
backup of the new database, it can't be included in an availability group.

1. In Object Explorer, right-click the database, point to Tasks, and then select Back
Up.

2. Select OK to take a full backup to the default backup location.

Create an availability group


You're now ready to create and configure an availability group by doing the following
tasks:

Create a database on the first SQL Server instance.


Take both a full backup and a transaction log backup of the database.
Restore the full and log backups to the second SQL Server instance by using the NO
RECOVERY option.
Create the availability group (MyTestAG) with synchronous commit, automatic
failover, and readable secondary replicas.

Create the availability group


1. Connect to your SQL Server VM by using remote desktop, and open SQL Server
Management Studio.

2. In Object Explorer in SSMS, right-click Always On High Availability and select New
Availability Group Wizard.

3. On the Introduction page, select Next. On the Specify Availability Group Options
page, enter a name for the availability group in the Availability group name box.
For example, enter MyTestAG. Then select Next.
4. On the Select Databases page, select your database, and then select Next.

7 Note

The database meets the prerequisites for an availability group because you've
taken at least one full backup on the intended primary replica.

5. On the Specify Replicas page, select Add Replica.


6. In the Connect to Server dialog, for Server name, enter the name of the second
SQL Server instance. Then select Connect.

Back on the Specify Replicas page, you should now see the second server listed
under Availability Replicas. Configure the replicas as follows.

7. Select Endpoints to see the database mirroring endpoint for this availability group.
Use the same port that you used when you set the firewall rule for database
mirroring endpoints.

8. On the Select Initial Data Synchronization page, select Full and specify a shared
network location. For the location, use the backup share that you created. In the
example, it was \\<First SQL Server Instance>\Backup\. Select Next.
7 Note

Full synchronization takes a full backup of the database on the first instance
of SQL Server and restores it to the second instance. For large databases, we
don't recommend full synchronization because it might take a long time.

You can reduce this time by manually taking a backup of the database and
restoring it with NO RECOVERY . If the database is already restored with NO
RECOVERY on the second SQL Server instance before you configure the
availability group, select Join only. If you want to take the backup after
configuring the availability group, select Skip initial data synchronization.

9. On the Validation page, select Next. This page should look similar to the following
image:
7 Note

There's a warning for the listener configuration because you haven't


configured an availability group listener. You can ignore this warning because
on Azure virtual machines, you create the listener after you create the Azure
load balancer.

10. On the Summary page, select Finish, and then wait while the wizard configures the
new availability group. On the Progress page, you can select More details to view
the detailed progress.

After the wizard finishes the configuration, inspect the Results page to verify that
the availability group is successfully created.
11. Select Close to close the wizard.

Check the availability group


1. In Object Explorer, expand Always On High Availability, and then expand
Availability Groups. You should now see the new availability group in this
container. Right-click the availability group and select Show Dashboard.
Your availability group dashboard should look similar to the following screenshot:

The dashboard shows the replicas, the failover mode of each replica, and the
synchronization state.

2. In Failover Cluster Manager, select your cluster. Select Roles.

The availability group name that you used is a role on the cluster. That availability
group doesn't have an IP address for client connections because you didn't
configure a listener. You'll configure the listener after you create an Azure load
balancer.
2 Warning

Don't try to fail over the availability group from Failover Cluster Manager. All
failover operations should be performed on the availability group dashboard
in SSMS. Learn more about restrictions on using Failover Cluster Manager
with availability groups.

At this point, you have an availability group with two SQL Server replicas. You can move
the availability group between instances. You can't connect to the availability group yet
because you don't have a listener.

In Azure virtual machines, the listener requires a load balancer. The next step is to create
the load balancer in Azure.

Create an Azure load balancer

7 Note

Availability group deployments to multiple subnets don't require a load balancer.


In a single-subnet environment, customers who use SQL Server 2019 CU8 and later
on Windows 2016 and later can replace the traditional virtual network name (VNN)
listener and Azure Load Balancer with a distributed network name (DNN) listener.
If you want to use a DNN, skip any tutorial steps that configure Azure Load
Balancer for your availability group.

On Azure virtual machines in a single subnet, a SQL Server availability group requires a
load balancer. The load balancer holds the IP addresses for the availability group
listeners and the Windows Server failover cluster. This section summarizes how to create
the load balancer in the Azure portal.

A load balancer in Azure can be either standard or basic. A standard load balancer has
more features than the basic load balancer. For an availability group, the standard load
balancer is required if you use an availability zone (instead of an availability set). For
details on the difference between the SKUs, see Azure Load Balancer SKUs.

) Important

On September 30, 2025, the Basic SKU for Azure Load Balancer will be retired. For
more information, see the official announcement . If you're currently using Basic
Load Balancer, upgrade to Standard Load Balancer before the retirement date. For
guidance, review Upgrade Load Balancer.

1. In the Azure portal, go to the resource group that contains your SQL Server VMs
and select + Add.

2. Search for load balancer. Choose the load balancer that Microsoft publishes.
3. Select Create.

4. On the Create load balancer page, configure the following parameters for the load
balancer:

Setting Entry or selection

Subscription Use the same subscription as the virtual machine.

Resource group Use the same resource group as the virtual machine.

Name Use a text name for the load balancer, such as sqlLB.

Region Use the same region as the virtual machine.

SKU Select Standard.

Type Select Internal.

The page should look like this:

5. Select Next: Frontend IP configuration.


6. Select + Add a frontend IP configuration.

7. Set up the frontend IP address by using the following values:

Name: Enter a name that identifies the frontend IP configuration.


Virtual network: Select the same network as the virtual machines.
Subnet: Select the same subnet as the virtual machines.
Assignment: Select Static.
IP address: Use an available address from the subnet. Use this address for
your availability group listener. This address is different from your cluster IP
address.
Availability zone: Optionally, choose an availability zone to deploy your IP
address to.

The following image shows the Add frontend IP configuration dialog:


8. Select Add.

9. Choose Review + Create to validate the configuration. Then select Create to create
the load balancer and the frontend IP address.

To configure the load balancer, you need to create a backend pool, create a probe, and
set the load-balancing rules.

Add a backend pool for the availability group listener


1. In the Azure portal, go to your availability group. You might need to refresh the
view to see the newly created load balancer.

2. Select the load balancer, select Backend pools, and then select +Add.

3. For Name, provide a name for the backend pool.

4. For Backend Pool Configuration, select NIC.

5. Select Add to associate the backend pool with the availability set that contains the
VMs.

6. Under Virtual machine, choose the virtual machines that will host availability
group replicas. Don't include the file share witness server.

7 Note

If both virtual machines are not specified, only connections to the primary
replica will succeed.

7. Select Add to add the virtual machines to the backend pool.

8. Select Save to create the backend pool.

Set the probe


1. In the Azure portal, select the load balancer, select Health probes, and then select
+Add.

2. Set the listener health probe as follows:

Setting Description Example

Name Text SQLAlwaysOnEndPointProbe

Protocol Choose TCP TCP


Setting Description Example

Port Any unused port 59999

Interval The amount of time between probe attempts, in 5


seconds

3. Select Add.

Set the load balancing rules


1. In the Azure portal, select the load balancer, select Load balancing rules, and then
select +Add.

2. Set the listener's load-balancing rules as follows:

Setting Description Example

Name Text SQLAlwaysOnEndPointListener

Frontend IP Choose an address Use the address that you created when
address you created the load balancer.

Backend pool Choose the backend pool Select the backend pool that contains the
virtual machines targeted for the load
balancer.

Protocol Choose TCP TCP

Port Use the port for the 1433


availability group listener

Backend Port This field isn't used when a 1433


floating IP is set for direct
server return

Health Probe The name that you specified SQLAlwaysOnEndPointProbe


for the probe

Session Dropdown list None


Persistence

Idle Timeout Minutes to keep a TCP 4


connection open

Floating IP A flow topology and an IP Enabled


(direct server address mapping scheme
return)
2 Warning

Direct server return is set during creation. You can't change it.

3. Select Save.

Add the cluster core IP address for the Windows Server


failover cluster
The IP address for the Windows Server failover cluster also needs to be on the load
balancer. If you're using Windows Server 2019, skip this process because the cluster
creates a Distributed Server Name value instead of the Cluster Network Name value.

1. In the Azure portal, go to the same Azure load balancer. Select Frontend IP
configuration, and then select +Add. Use the IP address that you configured for
the Windows Server failover cluster in the cluster core resources. Set the IP address
as Static.

2. On the load balancer, select Health probes, and then select +Add.

3. Set the cluster core IP address health probe for the Windows Server failover cluster
as follows:

Setting Description Example

Name Text WSFCEndPointProbe

Protocol Choose TCP TCP

Port Any unused port 58888

Interval The amount of time between probe attempts, in 5


seconds

4. Select Add to set the health probe.

5. Select Load balancing rules, and then select +Add.

6. Set the load-balancing rules for the cluster core IP address as follows:

Setting Description Example

Name Text WSFCEndPoint


Setting Description Example

Frontend Choose an address Use the address that you created when you
IP address configured the IP address for the Windows
Server failover cluster. This is different from
the listener IP address.

Backend Choose the backend pool Select the backend pool that contains the
pool virtual machines targeted for the load
balancer.

Protocol Choose TCP TCP

Port Use the port for the cluster IP 58888


address. This is an available
port that isn't used for the
listener probe port.

Backend This field isn't used when a 58888


Port floating IP is set for direct
server return

Probe The name that you specified WSFCEndPointProbe


for the probe

Session Dropdown list None


Persistence

Idle Minutes to keep a TCP 4


Timeout connection open

Floating IP A flow topology and an IP Enabled


(direct address mapping scheme
server
return)

2 Warning

Direct server return is set during creation. You can't change it.

7. Select OK.

Configure the listener


The next thing to do is configure an availability group listener on the failover cluster.

7 Note
This tutorial shows how to create a single listener, with one IP address for the
internal load balancer. To create listeners by using one or more IP addresses, see
Configure one or more Always On availability group listeners.

The availability group listener is an IP address and network name that the SQL Server
availability group listens on. To create the availability group listener:

1. Get the name of the cluster network resource:

a. Use RDP to connect to the Azure virtual machine that hosts the primary replica.

b. Open Failover Cluster Manager.

c. Select the Networks node, and note the cluster network name. Use this name in
the $ClusterNetworkName variable in the PowerShell script. In the following image,
the cluster network name is Cluster Network 1:

2. Add the client access point. The client access point is the network name that
applications use to connect to the databases in an availability group.

a. In Failover Cluster Manager, expand the cluster name, and then select Roles.

b. On the Roles pane, right-click the availability group name, and then select Add
Resource > Client Access Point.
c. In the Name box, create a name for this new listener.
The name for the new
listener is the network name that applications use to connect to databases in the
SQL Server availability group.

d. To finish creating the listener, select Next twice, and then select Finish. Don't
bring the listener or resource online at this point.

3. Take the cluster role for the availability group offline. In Failover Cluster Manager,
under Roles, right-click the role, and then select Stop Role.

4. Configure the IP resource for the availability group:

a. Select the Resources tab, and then expand the client access point that you
created. The client access point is offline.
b. Right-click the IP resource, and then select Properties. Note the name of the IP
address, and use it in the $IPResourceName variable in the PowerShell script.

c. Under IP Address, select Static IP Address. Set the IP address as the same
address that you used when you set the load balancer address on the Azure portal.
5. Make the SQL Server availability group dependent on the client access point:

a. In Failover Cluster Manager, select Roles, and then select your availability group.

b. On the Resources tab, under Other Resources, right-click the availability group
resource, and then select Properties.

c. On the Dependencies tab, add the name of the client access point (the listener).
d. Select OK.

6. Make the client access point dependent on the IP address:

a. In Failover Cluster Manager, select Roles, and then select your availability group.

b. On the Resources tab, right-click the client access point under Server Name,
and then select Properties.
c. Select the Dependencies tab. Verify that the IP address is a dependency. If it
isn't, set a dependency on the IP address. If multiple resources are listed, verify that
the IP addresses have OR, not AND, dependencies. Then select OK.
 Tip

You can validate that the dependencies are correctly configured. In Failover
Cluster Manager, go to Roles, right-click the availability group, select More
Actions, and then select Show Dependency Report. When the dependencies
are correctly configured, the availability group is dependent on the network
name, and the network name is dependent on the IP address.

7. Set the cluster parameters in PowerShell:

a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.

$ClusterNetworkName find the name in the Failover Cluster Manager by

selecting Networks, right-click the network and select Properties. The


$ClusterNetworkName is under Name on the General tab.
$IPResourceName is the name given to the IP Address resource in the Failover

Cluster Manager. This is found in the Failover Cluster Manager by selecting


Roles, select the SQL Server AG or FCI name, select the Resources tab under
Server Name, right-click the IP address resource and select Properties. The
correct value is under Name on the General tab.

$ListenerILBIP is the IP address that you created on the Azure load balancer

for the availability group listener. Find the $ListenerILBIP in the Failover
Cluster Manager on the same properties page as the SQL Server AG/FCI
Listener Resource Name.

$ListenerProbePort is the port that you configured on the Azure load

balancer for the availability group listener, such as 59999. Any unused TCP
port is valid.

PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>" # The cluster network


name. Use Get-ClusterNetwork on Windows Server 2012 or later to find
the name.

$IPResourceName = "<IPResourceName>" # The IP address resource name.

$ListenerILBIP = "<n.n.n.n>" # The IP address of the internal load


balancer. This is the static IP address for the load balancer that you
configured in the Azure portal.

[int]$ListenerProbePort = <nnnnn>

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask
"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.

7 Note

If your SQL Server instances are in separate regions, you need to run the
PowerShell script twice. The first time, use the $ListenerILBIP and
$ListenerProbePort values from the first region. The second time, use the
$ListenerILBIP and $ListenerProbePort values from the second region. The

cluster network name and the cluster IP resource name are also different for
each region.
8. Bring the cluster role for the availability group online. In Failover Cluster Manager,
under Roles, right-click the role, and then select Start Role.

If necessary, repeat the preceding steps to set the cluster parameters for the IP address
of the Windows Server failover cluster:

1. Get the IP address name of the Windows Server failover cluster. In Failover Cluster
Manager, under Cluster Core Resources, locate Server Name.

2. Right-click IP Address, and then select Properties.

3. Copy the name of the IP address from Name. It might be Cluster IP Address.

4. Set the cluster parameters in PowerShell:

a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.

$ClusterCoreIP is the IP address that you created on the Azure load balancer
for the Windows Server failover cluster's core cluster resource. It's different
from the IP address for the availability group listener.

$ClusterProbePort is the port that you configured on the Azure load balancer

for the Windows Server failover cluster's health probe. It's different from the
probe for the availability group listener.

PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>" # The cluster network


name. Use Get-ClusterNetwork on Windows Server 2012 or later to find
the name.

$IPResourceName = "<ClusterIPResourceName>" # The IP address resource


name.

$ClusterCoreIP = "<n.n.n.n>" # The IP address of the cluster IP


resource. This is the static IP address for the load balancer that you
configured in the Azure portal.

[int]$ClusterProbePort = <nnnnn> # The probe port from


WSFCEndPointprobe in the Azure portal. This port must be different from
the probe port for the availability group listener.

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ClusterCoreIP";"ProbePort"=$ClusterProbePort;"SubnetMask"
="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
If any SQL resource is configured to use a port between 49152 and 65536 (the default
dynamic port range for TCP/IP), add an exclusion for each port. Such resources might
include:

SQL Server database engine


Always On availability group listener
Health probe for the failover cluster instance
Database mirroring endpoint
Cluster core IP resource

Adding an exclusion will prevent other system processes from being dynamically
assigned to the same port. For this scenario, configure the following exclusions on all
cluster nodes:

netsh int ipv4 add excludedportrange tcp startport=58888 numberofports=1


store=persistent

netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1


store=persistent

It's important to configure the port exclusion when the port is not in use. Otherwise, the
command will fail with a message like "The process cannot access the file because it is
being used by another process."
To confirm that the exclusions are configured correctly,
use the following command: netsh int ipv4 show excludedportrange tcp .

2 Warning

The port for the availability group listener's health probe has to be different from
the port for the cluster core IP address's health probe. In these examples, the
listener port is 59999 and the cluster core IP address's health probe port is 58888.
Both ports require an "allow inbound" firewall rule.

Set the listener port


In SQL Server Management Studio, set the listener port:

1. Open SQL Server Management Studio and connect to the primary replica.

2. Go to Always On High Availability > Availability groups > Availability group


listeners.

3. Right-click the listener name that you created in Failover Cluster Manager, and
then select Properties.
4. In the Port box, specify the port number for the availability group listener. The
default is 1433. Select OK.

You now have an availability group for SQL Server on Azure VMs running in Azure
Resource Manager mode.

Test the connection to the listener


To test the connection:

1. Use RDP to connect to a SQL Server VM that's in the same virtual network but
doesn't own the replica, such as the other replica.

2. Use the sqlcmd utility to test the connection. For example, the following script
establishes a sqlcmd connection to the primary replica through the listener by
using Windows authentication:

Windows Command Prompt

sqlcmd -S <listenerName> -E

If the listener is using a port other than the default port (1433), specify the port in
the connection string. For example, the following command connects to a listener
at port 1435:

Windows Command Prompt

sqlcmd -S <listenerName>,1435 -E

The sqlcmd utility automatically connects to whichever SQL Server instance is the
current primary replica of the availability group.

 Tip

Make sure that the port you specify is open on the firewall of both SQL Server VMs.
Both servers require an inbound rule for the TCP port that you use. For more
information, see Add or edit firewall rules.

Next steps
Add an IP address to a load balancer for a second availability group
Configure automatic or manual failover

To learn more, see:

Windows Server failover cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Overview of Always On availability groups
HADR settings for SQL Server on Azure VMs
Configure a load balancer & availability
group listener (SQL Server on Azure
VMs)
Article • 03/29/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This article explains how to create a load balancer for a SQL Server Always On
availability group in Azure Virtual Machines within a single subnet that are running with
Azure Resource Manager. An availability group requires a load balancer when the SQL
Server instances are on Azure Virtual Machines. The load balancer stores the IP address
for the availability group listener. If an availability group spans multiple regions, each
region needs a load balancer.

To complete this task, you need to have a SQL Server Always On availability group
deployed in Azure VMs that are running with Resource Manager. Both SQL Server virtual
machines must belong to the same availability set. You can use the Microsoft template
to automatically create the availability group in Resource Manager. This template
automatically creates an internal load balancer for you.

If you prefer, you can manually configure an availability group.

This article requires that your availability groups are already configured.

View related articles:

Configure Always On availability groups in Azure VM (GUI)


Configure a VNet-to-VNet connection by using Azure Resource Manager and
PowerShell
By walking through this article, you create and configure a load balancer in the Azure
portal. After the process is complete, you configure the cluster to use the IP address
from the load balancer for the availability group listener.

Create & configure load balancer


In this portion of the task, do the following steps:

1. In the Azure portal, create the load balancer and configure the IP address.
2. Configure the back-end pool.
3. Create the probe.
4. Set the load-balancing rules.

7 Note

If the SQL Server instances are in multiple resource groups and regions, perform
each step twice, once in each resource group.

) Important

On September 30, 2025, the Basic SKU for the Azure Load Balancer will be retired.
For more information, see the official announcement . If you're currently using
Basic Load Balancer, upgrade to Standard Load Balancer prior to the retirement
date. For guidance, review upgrade load balancer.

Step 1: Create the load balancer and configure the IP


address
First, create the load balancer.

1. In the Azure portal, open the resource group that contains the SQL Server virtual
machines.

2. In the resource group, select + Create.

3. Search for load balancer. Choose Load Balancer (published by Microsoft) in the
search results.

4. On the Load Balancer blade, select Create.

5. Configure the following parameters for the load balancer.


Setting Field

Subscription Use the same subscription as the virtual machine.

Resource Group Use the same resource group as the virtual machine.

Name Use a text name for the load balancer, for example sqlLB.

Region Use the same region as the virtual machine.

SKU Standard

Type Internal

The Azure portal blade should look like this:

6. Select Next: Frontend IP Configuration

7. Select Add a frontend IP Configuration


8. Set up the frontend IP using the following values:

Name: A name that identifies the frontend IP configuration


Virtual network: The same network as the virtual machines.
Subnet: The subnet as the virtual machines.
IP address assignment: Static.
IP address: Use an available address from subnet. Use this address for your
availability group listener. Notice this is different from your cluster IP
address.
Availability zone: Optionally choose and availability zone to deploy your IP
to.

The following image shows the Add frontend IP Configuration UI:


9. Select Add to create the frontend IP.

10. Choose Review + Create to validate the configuration, and then Create to create
the load balancer and the frontend IP.

Azure creates the load balancer. The load balancer belongs to a specific network,
subnet, resource group, and location. After Azure completes the task, verify the load
balancer settings in Azure.

To configure the load balancer, you need to create a backend pool, a probe, and set the
load balancing rules. Do these in the Azure portal.
Step 2: Configure the backend pool
Azure calls the back-end address pool backend pool. In this case, the backend pool is the
addresses of the two SQL Server instances in your availability group.

1. In the Azure portal, go to your availability group. You might need to refresh the
view to see the newly created load balancer.

2. Select the load balancer, select Backend pools, and select +Add.

3. Provide a Name for the Backend pool.

4. Select NIC for Backend Pool Configuration.

5. Select Add to associate the backend pool with the availability set that contains the
VMs.

6. Under Virtual machine choose the SQL Server virtual machines that will host
availability group replicas.

7 Note

If both virtual machines are not specified, connections will only succeed to the
primary replica.

7. Select Add to add the virtual machines to the backend pool.

8. Select Save to create the backend pool.

Azure updates the settings for the back-end address pool. Now your availability set has
a pool of two SQL Server instances.

Step 3: Create a probe


The probe defines how Azure verifies which of the SQL Server instances currently owns
the availability group listener. Azure probes the service based on the IP address on a
port that you define when you create the probe.

1. Select the load balancer, choose Health probes, and then select +Add.

2. Set the listener health probe as follows:

Setting Description Example

Name Text SQLAlwaysOnEndPointProbe

Protocol Choose TCP TCP

Port Any unused port 59999

Interval The amount of time between probe attempts in 5


seconds

3. Select Add to set the health probe.

7 Note

Make sure that the port you specify is open on the firewall of both SQL Server
instances. Both instances require an inbound rule for the TCP port that you use. For
more information, see Add or Edit Firewall Rule.

Azure creates the probe and then uses it to test which SQL Server instance has the
listener for the availability group.

Step 4: Set the load-balancing rules


The load-balancing rules configure how the load balancer routes traffic to the SQL
Server instances. For this load balancer, you enable direct server return because only
one of the two SQL Server instances owns the availability group listener resource at a
time.

1. Select the load balancer, choose Load balancing rules, and select +Add.

2. Set the listener load balancing rules as follows.

Setting Description Example

Name Text SQLAlwaysOnEndPointListener


Setting Description Example

Frontend IP Choose an address Use the address that you created when
address you created the load balancer.

Backend pool Choose the backend pool Select the backend pool containing the
virtual machines targeted for the load
balancer.

Protocol Choose TCP TCP

Port Use the port for the 1433


availability group listener

Backend Port This field isn't used when 1433


Floating IP is set for direct
server return

Health Probe The name you specified for SQLAlwaysOnEndPointProbe


the probe

Session Drop down list None


Persistence

Idle Timeout Minutes to keep a TCP 4


connection open

Floating IP A flow topology and an IP Enabled


(direct server address mapping scheme
return)

2 Warning

Direct server return is set during creation. It cannot be changed.

7 Note

You might have to scroll down the blade to view all the settings.

3. Select Save to set the listener load balancing rules.

Azure configures the load-balancing rule. Now the load balancer is configured to route
traffic to the SQL Server instance that hosts the listener for the availability group.

At this point, the resource group has a load balancer that connects to both SQL Server
machines. The load balancer also contains an IP address for the SQL Server Always On
availability group listener, so that either machine can respond to requests for the
availability groups.

7 Note

If your SQL Server instances are in two separate regions, repeat the steps in the
other region. Each region requires a load balancer.

Add the cluster core IP address for the Windows Server


Failover Cluster (WSFC)
The WSFC IP address also needs to be on the load balancer. If you're using Windows
Server 2019, skip this process as the cluster creates a Distributed Server Name instead
of the Cluster Network Name.

1. In the Azure portal, go to the same Azure load balancer. Select Frontend IP
configuration and select +Add. Use the IP Address you configured for the WSFC in
the cluster core resources. Set the IP address as static.

2. On the load balancer, select Health probes, and then select +Add.

3. Set the WSFC cluster core IP address health probe as follows:

Setting Description Example

Name Text WSFCEndPointProbe

Protocol Choose TCP TCP

Port Any unused port 58888

Interval The amount of time between probe attempts in seconds 5

4. Select Add to set the health probe.

5. Set the load balancing rules. Select Load balancing rules, and select +Add.

6. Set the cluster core IP address load balancing rules as follows.

Setting Description Example

Name Text WSFCEndPoint


Setting Description Example

Frontend Choose an address Use the address that you created when
IP address you configured the WSFC IP address.
This is different from the listener IP
address

Backend Choose the backend pool Select the backend pool containing the
pool virtual machines targeted for the load
balancer.

Protocol Choose TCP TCP

Port Use the port for the cluster IP 58888


address. This is an available port
that isn't used for the listener
probe port.

Backend This field isn't used when Floating 58888


Port IP is set for direct server return

Probe The name you specified for the WSFCEndPointProbe


probe

Session Drop down list None


Persistence

Idle Minutes to keep a TCP connection 4


Timeout open

Floating IP A flow topology and an IP address Enabled


(direct mapping scheme
server
return)

2 Warning

Direct server return is set during creation. It cannot be changed.

7. Select OK to set the load balancing rules.

Configure the cluster to use the load balancer


IP address
The next step is to configure the listener on the cluster, and bring the listener online. Do
the following steps:
1. Create the availability group listener on the failover cluster.

2. Bring the listener online.

Step 5: Create the availability group listener on the


failover cluster
In this step, you manually create the availability group listener in Failover Cluster
Manager and SQL Server Management Studio.

The availability group listener is an IP address and network name that the SQL Server
availability group listens on. To create the availability group listener:

1. Get the name of the cluster network resource:

a. Use RDP to connect to the Azure virtual machine that hosts the primary replica.

b. Open Failover Cluster Manager.

c. Select the Networks node, and note the cluster network name. Use this name in
the $ClusterNetworkName variable in the PowerShell script. In the following image,
the cluster network name is Cluster Network 1:

2. Add the client access point. The client access point is the network name that
applications use to connect to the databases in an availability group.

a. In Failover Cluster Manager, expand the cluster name, and then select Roles.

b. On the Roles pane, right-click the availability group name, and then select Add
Resource > Client Access Point.
c. In the Name box, create a name for this new listener.
The name for the new
listener is the network name that applications use to connect to databases in the
SQL Server availability group.

d. To finish creating the listener, select Next twice, and then select Finish. Don't
bring the listener or resource online at this point.

3. Take the cluster role for the availability group offline. In Failover Cluster Manager,
under Roles, right-click the role, and then select Stop Role.

4. Configure the IP resource for the availability group:

a. Select the Resources tab, and then expand the client access point that you
created. The client access point is offline.
b. Right-click the IP resource, and then select Properties. Note the name of the IP
address, and use it in the $IPResourceName variable in the PowerShell script.

c. Under IP Address, select Static IP Address. Set the IP address as the same
address that you used when you set the load balancer address on the Azure portal.
5. Make the SQL Server availability group dependent on the client access point:

a. In Failover Cluster Manager, select Roles, and then select your availability group.

b. On the Resources tab, under Other Resources, right-click the availability group
resource, and then select Properties.

c. On the Dependencies tab, add the name of the client access point (the listener).
d. Select OK.

6. Make the client access point dependent on the IP address:

a. In Failover Cluster Manager, select Roles, and then select your availability group.

b. On the Resources tab, right-click the client access point under Server Name,
and then select Properties.
c. Select the Dependencies tab. Verify that the IP address is a dependency. If it
isn't, set a dependency on the IP address. If multiple resources are listed, verify that
the IP addresses have OR, not AND, dependencies. Then select OK.
 Tip

You can validate that the dependencies are correctly configured. In Failover
Cluster Manager, go to Roles, right-click the availability group, select More
Actions, and then select Show Dependency Report. When the dependencies
are correctly configured, the availability group is dependent on the network
name, and the network name is dependent on the IP address.

7. Set the cluster parameters in PowerShell:

a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.

$ClusterNetworkName find the name in the Failover Cluster Manager by

selecting Networks, right-click the network and select Properties. The


$ClusterNetworkName is under Name on the General tab.
$IPResourceName is the name given to the IP Address resource in the Failover

Cluster Manager. This is found in the Failover Cluster Manager by selecting


Roles, select the SQL Server AG or FCI name, select the Resources tab under
Server Name, right-click the IP address resource and select Properties. The
correct value is under Name on the General tab.

$ListenerILBIP is the IP address that you created on the Azure load balancer

for the availability group listener. Find the $ListenerILBIP in the Failover
Cluster Manager on the same properties page as the SQL Server AG/FCI
Listener Resource Name.

$ListenerProbePort is the port that you configured on the Azure load

balancer for the availability group listener, such as 59999. Any unused TCP
port is valid.

PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>" # The cluster network


name. Use Get-ClusterNetwork on Windows Server 2012 or later to find
the name.

$IPResourceName = "<IPResourceName>" # The IP address resource name.

$ListenerILBIP = "<n.n.n.n>" # The IP address of the internal load


balancer. This is the static IP address for the load balancer that you
configured in the Azure portal.

[int]$ListenerProbePort = <nnnnn>

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask
"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.

7 Note

If your SQL Server instances are in separate regions, you need to run the
PowerShell script twice. The first time, use the $ListenerILBIP and
$ListenerProbePort values from the first region. The second time, use the
$ListenerILBIP and $ListenerProbePort values from the second region. The

cluster network name and the cluster IP resource name are also different for
each region.
8. Bring the cluster role for the availability group online. In Failover Cluster Manager,
under Roles, right-click the role, and then select Start Role.

If necessary, repeat the preceding steps to set the cluster parameters for the IP address
of the Windows Server failover cluster:

1. Get the IP address name of the Windows Server failover cluster. In Failover Cluster
Manager, under Cluster Core Resources, locate Server Name.

2. Right-click IP Address, and then select Properties.

3. Copy the name of the IP address from Name. It might be Cluster IP Address.

4. Set the cluster parameters in PowerShell:

a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.

$ClusterCoreIP is the IP address that you created on the Azure load balancer
for the Windows Server failover cluster's core cluster resource. It's different
from the IP address for the availability group listener.

$ClusterProbePort is the port that you configured on the Azure load balancer

for the Windows Server failover cluster's health probe. It's different from the
probe for the availability group listener.

PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>" # The cluster network


name. Use Get-ClusterNetwork on Windows Server 2012 or later to find
the name.

$IPResourceName = "<ClusterIPResourceName>" # The IP address resource


name.

$ClusterCoreIP = "<n.n.n.n>" # The IP address of the cluster IP


resource. This is the static IP address for the load balancer that you
configured in the Azure portal.

[int]$ClusterProbePort = <nnnnn> # The probe port from


WSFCEndPointprobe in the Azure portal. This port must be different from
the probe port for the availability group listener.

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ClusterCoreIP";"ProbePort"=$ClusterProbePort;"SubnetMask"
="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
If any SQL resource is configured to use a port between 49152 and 65536 (the default
dynamic port range for TCP/IP), add an exclusion for each port. Such resources might
include:

SQL Server database engine


Always On availability group listener
Health probe for the failover cluster instance
Database mirroring endpoint
Cluster core IP resource

Adding an exclusion will prevent other system processes from being dynamically
assigned to the same port. For this scenario, configure the following exclusions on all
cluster nodes:

netsh int ipv4 add excludedportrange tcp startport=58888 numberofports=1


store=persistent

netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1


store=persistent

It's important to configure the port exclusion when the port is not in use. Otherwise, the
command will fail with a message like "The process cannot access the file because it is
being used by another process."
To confirm that the exclusions are configured correctly,
use the following command: netsh int ipv4 show excludedportrange tcp .

2 Warning

The port for the availability group listener's health probe has to be different from
the port for the cluster core IP address's health probe. In these examples, the
listener port is 59999 and the cluster core IP address's health probe port is 58888.
Both ports require an "allow inbound" firewall rule.

Verify the configuration of the listener


If the cluster resources and dependencies are correctly configured, you should be able
to view the listener in SQL Server Management Studio. To set the listener port, do the
following steps:

1. Start SQL Server Management Studio, and then connect to the primary replica.

2. Go to Always On High Availability > Availability Groups > Availability Group


Listeners.
You should now see the listener name that you created in Failover Cluster
Manager.

3. Right-click the listener name, and then select Properties.

4. In the Port box, specify the port number for the availability group listener by using
the $EndpointPort you used earlier (1433 was the default), and then select OK.

You now have an availability group in Azure virtual machines running in Resource
Manager mode.

Test the connection to the listener


Test the connection by doing the following steps:

1. Use remote desktop protocol (RDP) to connect to a SQL Server instance that's in
the same virtual network, but doesn't own the replica. This server can be the other
SQL Server instance in the cluster.

2. Use sqlcmd utility to test the connection. For example, the following script
establishes a sqlcmd connection to the primary replica through the listener with
Windows authentication:

Console

sqlcmd -S <listenerName> -E

The SQLCMD connection automatically connects to the SQL Server instance that hosts
the primary replica.

Create an IP address for an additional


availability group
Each availability group uses a separate listener. Each listener has its own IP address. Use
the same load balancer to hold the IP address for additional listeners. Add only the
primary IP address of the VM to the back-end pool of the load balancer as the
secondary VM IP address doesn't support floating IP.

To add an IP address to a load balancer with the Azure portal, do the following steps:

1. In the Azure portal, open the resource group that contains the load balancer, and
then select the load balancer.
2. Under Settings, select Frontend IP configuration, and then select + Add.

3. Under Add frontend IP address, assign a name for the front end.

4. Verify that the Virtual network and the Subnet are the same as the SQL Server
instances.

5. Set the IP address for the listener.

 Tip

You can set the IP address to static and type an address that is not currently
used in the subnet. Alternatively, you can set the IP address to dynamic and
save the new front-end IP pool. When you do so, the Azure portal
automatically assigns an available IP address to the pool. You can then reopen
the front-end IP pool and change the assignment to static.

6. Save the IP address for the listener by selecting Add.

7. Add a health probe selecting Health probes under Settings and use the following
settings:

Setting Value

Name A name to identify the probe.

Protocol TCP

Port An unused TCP port, which must be available on all virtual machines. It can't be
used for any other purpose. No two listeners can use the same probe port.

Interval The amount of time between probe attempts. Use the default (5).

8. Select Add to save the probe.

9. Create a load-balancing rule. Under Settings, select Load balancing rules, and
then select + Add.

10. Configure the new load-balancing rule by using the following settings:

Setting Value

Name A name to identify the load-balancing rule.

Frontend IP Select the IP address you created.


address
Setting Value

Backend pool The pool that contains the virtual machines with the SQL Server
instances.

Protocol TCP

Port Use the port that the SQL Server instances are using. A default
instance uses port 1433, unless you changed it.

Backend port Use the same value as Port.

Health probe Choose the probe you created.

Session persistence None

Idle timeout Default (4)


(minutes)

Floating IP (direct Enabled


server return)

Configure the availability group to use the new IP address


To finish configuring the cluster, repeat the steps that you followed when you made the
first availability group. That is, configure the cluster to use the new IP address.

After you've added an IP address for the listener, configure the additional availability
group by doing the following steps:

1. Verify that the probe port for the new IP address is open on both SQL Server
virtual machines.

2. In Cluster Manager, add the client access point.

3. Configure the IP resource for the availability group.

) Important

When you create the IP address, use the IP address that you added to the
load balancer.

4. Make the SQL Server availability group resource dependent on the client access
point.

5. Make the client access point resource dependent on the IP address.


6. Set the cluster parameters in PowerShell.

If you're on the secondary replica VM, and you're unable to connect to the listener, it's
possible the probe port was not configured correctly.

You can use the following script to validate the probe port is correctly configured for the
availability group:

PowerShell

Clear-Host

Get-ClusterResource `

| Where-Object {$_.ResourceType.Name -like "IP Address"} `

| Get-ClusterParameter `

| Where-Object {($_.Name -like "Network") -or ($_.Name -like "Address") -or


($_.Name -like "ProbePort") -or ($_.Name -like "SubnetMask")}

Add load-balancing rule for distributed


availability group
If an availability group participates in a distributed availability group, the load balancer
needs an additional rule. This rule stores the port used by the distributed availability
group listener.

) Important

This step only applies if the availability group participates in a distributed


availability group.

1. On each server that participates in the distributed availability group, create an


inbound rule on the distributed availability group listener TCP port. In many
examples, documentation uses 5022.

2. In the Azure portal, select the load balancer and select Load balancing rules, and
then select +Add.

3. Create the load balancing rule with the following settings:

Setting Value

Name A name to identify the load balancing rule for the distributed
availability group.
Setting Value

Frontend IP address Use the same frontend IP address as the availability group.

Backend pool The pool that contains the virtual machines with the SQL Server
instances.

Protocol TCP

Port 5022 - The port for the distributed availability group endpoint
listener.

Can be any available port.

Backend port 5022 - Use the same value as Port.

Health probe Choose the probe you created.

Session persistence None

Idle timeout (minutes) Default (4)

Floating IP (direct server Enabled


return)

Repeat these steps for the load balancer on the other availability groups that participate
in the distributed availability groups.

If you have an Azure Network Security Group to restrict access, make sure that the allow
rules include:

The backend SQL Server VM IP addresses


The load balancer floating IP addresses for the AG listener
The cluster core IP address, if applicable.

Next steps
To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Always On availability groups overview
HADR settings for SQL Server on Azure VMs
Configure one or more Always On
availability group listeners
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This document shows you how to use PowerShell to do one of the following tasks:

create a load balancer


add IP addresses to an existing load balancer for SQL Server availability groups.

An availability group listener is a virtual network name that clients connect to for
database access. On Azure Virtual Machines in a single subnet, a load balancer holds the
IP address for the listener. The load balancer routes traffic to the instance of SQL Server
that is listening on the probe port. Usually, an availability group uses an internal load
balancer. An Azure internal load balancer can host one or many IP addresses. Each IP
address uses a specific probe port.

The ability to assign multiple IP addresses to an internal load balancer is new to Azure
and is only available in the Resource Manager model. To complete this task, you need to
have a SQL Server availability group deployed on Azure Virtual Machines in the
Resource Manager model. Both SQL Server virtual machines must belong to the same
availability set. You can use the Microsoft template to automatically create the
availability group in Azure Resource Manager. This template automatically creates the
availability group, including the internal load balancer for you. If you prefer, you can
manually configure an Always On availability group.

To complete the steps in this article, your availability groups need to be already
configured.

Related topics include:


Configure Always On Availability Groups in Azure VM (GUI)
Configure a VNet-to-VNet connection by using Azure Resource Manager and
PowerShell

7 Note

This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.

Start your PowerShell session


Run the Connect-Az Account cmdlet and you will be presented with a sign-in screen to
enter your credentials. Use the same credentials that you use to sign in to the Azure
portal.

PowerShell

Connect-AzAccount

If you have multiple subscriptions use the Set-AzContext cmdlet to select which
subscription your PowerShell session should use. To see what subscription the current
PowerShell session is using, run Get-AzContext. To see all your subscriptions, run Get-
AzSubscription.

PowerShell

Set-AzContext -SubscriptionId '4cac86b0-1e56-bbbb-aaaa-000000000000'

Verify PowerShell version


The examples in this article are tested using Azure PowerShell module version 5.4.1.

Verify that your PowerShell module is 5.4.1 or later.

See Install the Azure PowerShell module.

Configure the Windows Firewall


Configure the Windows Firewall to allow SQL Server access. The firewall rules allow TCP
connections to the ports use by the SQL Server instance, and the listener probe. For
detailed instructions, see Configure a Windows Firewall for Database Engine Access.
Create an inbound rule for the SQL Server port and for the probe port.

If you are restricting access with an Azure Network Security Group, ensure that the allow
rules include the backend SQL Server VM IP addresses, and the load balancer floating IP
addresses for the AG listener and the cluster core IP address, if applicable.

Determine the load balancer SKU required


Azure load balancer is available in two SKUs: Basic & Standard. The standard load
balancer is recommended as the Basic SKU is scheduled to be retired on September 30,
2025 . The standard load balancer is required for virtual machines in an availability
zone. Standard load balancer requires that all VM IP addresses use standard IP
addresses.

The current Microsoft template for an availability group uses a basic load balancer with
basic IP addresses.

7 Note

You will need to configure a service endpoint if you use a standard load balancer
and Azure Storage for the cloud witness.

The examples in this article specify a standard load balancer. In the examples, the script
includes -sku Standard .

PowerShell

$ILB= New-AzLoadBalancer -Location $Location -Name $ILBName -


ResourceGroupName $ResourceGroupName -FrontendIpConfiguration $FEConfig -
BackendAddressPool $BEConfig -LoadBalancingRule $ILBRule -Probe
$SQLHealthProbe -sku Standard

To create a basic load balancer, remove -sku Standard from the line that creates the
load balancer. For example:

PowerShell

$ILB= New-AzLoadBalancer -Location $Location -Name $ILBName -


ResourceGroupName $ResourceGroupName -FrontendIpConfiguration $FEConfig -
BackendAddressPool $BEConfig -LoadBalancingRule $ILBRule -Probe
$SQLHealthProbe

Example Script: Create an internal load


balancer with PowerShell

7 Note

If you created your availability group with the Microsoft template, the internal load
balancer was already created.

The following PowerShell script creates an internal load balancer, configures the load-
balancing rules, and sets an IP address for the load balancer. To run the script, open
Windows PowerShell ISE, and then paste the script in the Script pane. Use Connect-
AzAccount to log in to PowerShell. If you have multiple Azure subscriptions, use Select-
AzSubscription to set the subscription.

PowerShell

# Connect-AzAccount

# Select-AzSubscription -SubscriptionId <xxxxxxxxxxx-xxxx-xxxx-xxxx-


xxxxxxxxxxxx>

$ResourceGroupName = "<Resource Group Name>" # Resource group name

$VNetName = "<Virtual Network Name>" # Virtual network name

$SubnetName = "<Subnet Name>" # Subnet name

$ILBName = "<Load Balancer Name>" # ILB name

$Location = "<Azure Region>" # Azure location

$VMNames = "<VM1>","<VM2>" # Virtual machine names

$ILBIP = "<n.n.n.n>" # IP address

[int]$ListenerPort = "<nnnn>" # AG listener port

[int]$ProbePort = "<nnnn>" # Probe port

$LBProbeName ="ILBPROBE_$ListenerPort" # The Load balancer Probe


Object Name

$LBConfigRuleName = "ILBCR_$ListenerPort" # The Load Balancer Rule Object


Name

$FrontEndConfigurationName = "FE_SQLAGILB_1" # Object name for the front-end


configuration

$BackEndConfigurationName ="BE_SQLAGILB_1" # Object name for the back-end


configuration

$VNet = Get-AzVirtualNetwork -Name $VNetName -ResourceGroupName


$ResourceGroupName

$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $VNet -Name


$SubnetName

$FEConfig = New-AzLoadBalancerFrontendIpConfig -Name


$FrontEndConfigurationName -PrivateIpAddress $ILBIP -SubnetId $Subnet.id

$BEConfig = New-AzLoadBalancerBackendAddressPoolConfig -Name


$BackEndConfigurationName

$SQLHealthProbe = New-AzLoadBalancerProbeConfig -Name $LBProbeName -Protocol


tcp -Port $ProbePort -IntervalInSeconds 15 -ProbeCount 2

$ILBRule = New-AzLoadBalancerRuleConfig -Name $LBConfigRuleName -


FrontendIpConfiguration $FEConfig -BackendAddressPool $BEConfig -Probe
$SQLHealthProbe -Protocol tcp -FrontendPort $ListenerPort -BackendPort
$ListenerPort -LoadDistribution Default -EnableFloatingIP

$ILB= New-AzLoadBalancer -Location $Location -Name $ILBName -


ResourceGroupName $ResourceGroupName -FrontendIpConfiguration $FEConfig -
BackendAddressPool $BEConfig -LoadBalancingRule $ILBRule -Probe
$SQLHealthProbe

$bepool = Get-AzLoadBalancerBackendAddressPoolConfig -Name


$BackEndConfigurationName -LoadBalancer $ILB

foreach($VMName in $VMNames)

$VM = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName

$NICName = ($vm.NetworkProfile.NetworkInterfaces.Id.split('/') |
select -last 1)

$NIC = Get-AzNetworkInterface -name $NICName -ResourceGroupName


$ResourceGroupName

$NIC.IpConfigurations[0].LoadBalancerBackendAddressPools = $BEPool

Set-AzNetworkInterface -NetworkInterface $NIC

start-AzVM -ResourceGroupName $ResourceGroupName -Name $VM.Name

Example script: Add an IP address to an


existing load balancer with PowerShell
To use more than one availability group, add an additional IP address to the load
balancer. Each IP address requires its own load-balancing rule, probe port, and front
port.
Add only the primary IP address of the VM to the back-end pool of the load
balancer as the secondary VM IP address does not support floating IP.

The front-end port is the port that applications use to connect to the SQL Server
instance. IP addresses for different availability groups can use the same front-end port.
7 Note

For SQL Server availability groups, each IP address requires a specific probe port.
For example, if one IP address on a load balancer uses probe port 59999, no other
IP addresses on that load balancer can use probe port 59999.

For information about load balancer limits, see Private front end IP per load
balancer under Networking Limits - Azure Resource Manager.
For information about availability group limits, see Restrictions (Availability
Groups).

The following script adds a new IP address to an existing load balancer. The ILB uses the
listener port for the load-balancing front-end port. This port can be the port that SQL
Server is listening on. For default instances of SQL Server, the port is 1433. The load-
balancing rule for an availability group requires a floating IP (direct server return) so the
back-end port is the same as the front-end port. Update the variables for your
environment.

PowerShell

# Connect-AzAccount

# Select-AzSubscription -SubscriptionId <xxxxxxxxxxx-xxxx-xxxx-xxxx-


xxxxxxxxxxxx>

$ResourceGroupName = "<ResourceGroup>" # Resource group name

$VNetName = "<VirtualNetwork>" # Virtual network name

$SubnetName = "<Subnet>" # Subnet name

$ILBName = "<ILBName>" # ILB name

$ILBIP = "<n.n.n.n>" # IP address

[int]$ListenerPort = "<nnnn>" # AG listener port

[int]$ProbePort = "<nnnnn>" # Probe port

$ILB = Get-AzLoadBalancer -Name $ILBName -ResourceGroupName


$ResourceGroupName

$count = $ILB.FrontendIpConfigurations.Count+1

$FrontEndConfigurationName ="FE_SQLAGILB_$count"

$LBProbeName = "ILBPROBE_$count"

$LBConfigrulename = "ILBCR_$count"

$VNet = Get-AzVirtualNetwork -Name $VNetName -ResourceGroupName


$ResourceGroupName

$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $VNet -Name


$SubnetName

$ILB | Add-AzLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -


PrivateIpAddress $ILBIP -SubnetId $Subnet.Id

$ILB | Add-AzLoadBalancerProbeConfig -Name $LBProbeName -Protocol Tcp -Port


$Probeport -ProbeCount 2 -IntervalInSeconds 15 | Set-AzLoadBalancer

$ILB = Get-AzLoadBalancer -Name $ILBname -ResourceGroupName


$ResourceGroupName

$FEConfig = get-AzLoadBalancerFrontendIpConfig -Name


$FrontEndConfigurationName -LoadBalancer $ILB

$SQLHealthProbe = Get-AzLoadBalancerProbeConfig -Name $LBProbeName -


LoadBalancer $ILB

$BEConfig = Get-AzLoadBalancerBackendAddressPoolConfig -Name


$ILB.BackendAddressPools[0].Name -LoadBalancer $ILB

$ILB | Add-AzLoadBalancerRuleConfig -Name $LBConfigRuleName -


FrontendIpConfiguration $FEConfig -BackendAddressPool $BEConfig -Probe
$SQLHealthProbe -Protocol tcp -FrontendPort $ListenerPort -BackendPort
$ListenerPort -LoadDistribution Default -EnableFloatingIP | Set-
AzLoadBalancer

Configure the listener


The availability group listener is an IP address and network name that the SQL Server
availability group listens on. To create the availability group listener:

1. Get the name of the cluster network resource:

a. Use RDP to connect to the Azure virtual machine that hosts the primary replica.

b. Open Failover Cluster Manager.

c. Select the Networks node, and note the cluster network name. Use this name in
the $ClusterNetworkName variable in the PowerShell script. In the following image,
the cluster network name is Cluster Network 1:
2. Add the client access point. The client access point is the network name that
applications use to connect to the databases in an availability group.

a. In Failover Cluster Manager, expand the cluster name, and then select Roles.

b. On the Roles pane, right-click the availability group name, and then select Add
Resource > Client Access Point.

c. In the Name box, create a name for this new listener.


The name for the new
listener is the network name that applications use to connect to databases in the
SQL Server availability group.

d. To finish creating the listener, select Next twice, and then select Finish. Don't
bring the listener or resource online at this point.
3. Take the cluster role for the availability group offline. In Failover Cluster Manager,
under Roles, right-click the role, and then select Stop Role.

4. Configure the IP resource for the availability group:

a. Select the Resources tab, and then expand the client access point that you
created. The client access point is offline.

b. Right-click the IP resource, and then select Properties. Note the name of the IP
address, and use it in the $IPResourceName variable in the PowerShell script.

c. Under IP Address, select Static IP Address. Set the IP address as the same
address that you used when you set the load balancer address on the Azure portal.
5. Make the SQL Server availability group dependent on the client access point:

a. In Failover Cluster Manager, select Roles, and then select your availability group.

b. On the Resources tab, under Other Resources, right-click the availability group
resource, and then select Properties.

c. On the Dependencies tab, add the name of the client access point (the listener).
d. Select OK.

6. Make the client access point dependent on the IP address:

a. In Failover Cluster Manager, select Roles, and then select your availability group.

b. On the Resources tab, right-click the client access point under Server Name,
and then select Properties.
c. Select the Dependencies tab. Verify that the IP address is a dependency. If it
isn't, set a dependency on the IP address. If multiple resources are listed, verify that
the IP addresses have OR, not AND, dependencies. Then select OK.
 Tip

You can validate that the dependencies are correctly configured. In Failover
Cluster Manager, go to Roles, right-click the availability group, select More
Actions, and then select Show Dependency Report. When the dependencies
are correctly configured, the availability group is dependent on the network
name, and the network name is dependent on the IP address.

7. Set the cluster parameters in PowerShell:

a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.

$ClusterNetworkName find the name in the Failover Cluster Manager by

selecting Networks, right-click the network and select Properties. The


$ClusterNetworkName is under Name on the General tab.
$IPResourceName is the name given to the IP Address resource in the Failover

Cluster Manager. This is found in the Failover Cluster Manager by selecting


Roles, select the SQL Server AG or FCI name, select the Resources tab under
Server Name, right-click the IP address resource and select Properties. The
correct value is under Name on the General tab.

$ListenerILBIP is the IP address that you created on the Azure load balancer

for the availability group listener. Find the $ListenerILBIP in the Failover
Cluster Manager on the same properties page as the SQL Server AG/FCI
Listener Resource Name.

$ListenerProbePort is the port that you configured on the Azure load

balancer for the availability group listener, such as 59999. Any unused TCP
port is valid.

PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>" # The cluster network


name. Use Get-ClusterNetwork on Windows Server 2012 or later to find
the name.

$IPResourceName = "<IPResourceName>" # The IP address resource name.

$ListenerILBIP = "<n.n.n.n>" # The IP address of the internal load


balancer. This is the static IP address for the load balancer that you
configured in the Azure portal.

[int]$ListenerProbePort = <nnnnn>

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask
"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.

7 Note

If your SQL Server instances are in separate regions, you need to run the
PowerShell script twice. The first time, use the $ListenerILBIP and
$ListenerProbePort values from the first region. The second time, use the
$ListenerILBIP and $ListenerProbePort values from the second region. The

cluster network name and the cluster IP resource name are also different for
each region.
8. Bring the cluster role for the availability group online. In Failover Cluster Manager,
under Roles, right-click the role, and then select Start Role.

If necessary, repeat the preceding steps to set the cluster parameters for the IP address
of the Windows Server failover cluster:

1. Get the IP address name of the Windows Server failover cluster. In Failover Cluster
Manager, under Cluster Core Resources, locate Server Name.

2. Right-click IP Address, and then select Properties.

3. Copy the name of the IP address from Name. It might be Cluster IP Address.

4. Set the cluster parameters in PowerShell:

a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.

$ClusterCoreIP is the IP address that you created on the Azure load balancer
for the Windows Server failover cluster's core cluster resource. It's different
from the IP address for the availability group listener.

$ClusterProbePort is the port that you configured on the Azure load balancer

for the Windows Server failover cluster's health probe. It's different from the
probe for the availability group listener.

PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>" # The cluster network


name. Use Get-ClusterNetwork on Windows Server 2012 or later to find
the name.

$IPResourceName = "<ClusterIPResourceName>" # The IP address resource


name.

$ClusterCoreIP = "<n.n.n.n>" # The IP address of the cluster IP


resource. This is the static IP address for the load balancer that you
configured in the Azure portal.

[int]$ClusterProbePort = <nnnnn> # The probe port from


WSFCEndPointprobe in the Azure portal. This port must be different from
the probe port for the availability group listener.

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ClusterCoreIP";"ProbePort"=$ClusterProbePort;"SubnetMask"
="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
If any SQL resource is configured to use a port between 49152 and 65536 (the default
dynamic port range for TCP/IP), add an exclusion for each port. Such resources might
include:

SQL Server database engine


Always On availability group listener
Health probe for the failover cluster instance
Database mirroring endpoint
Cluster core IP resource

Adding an exclusion will prevent other system processes from being dynamically
assigned to the same port. For this scenario, configure the following exclusions on all
cluster nodes:

netsh int ipv4 add excludedportrange tcp startport=58888 numberofports=1


store=persistent

netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1


store=persistent

It's important to configure the port exclusion when the port is not in use. Otherwise, the
command will fail with a message like "The process cannot access the file because it is
being used by another process."
To confirm that the exclusions are configured correctly,
use the following command: netsh int ipv4 show excludedportrange tcp .

2 Warning

The port for the availability group listener's health probe has to be different from
the port for the cluster core IP address's health probe. In these examples, the
listener port is 59999 and the cluster core IP address's health probe port is 58888.
Both ports require an "allow inbound" firewall rule.

Set the listener port in SQL Server


Management Studio
1. Launch SQL Server Management Studio and connect to the primary replica.

2. Navigate to Always On High Availability > Availability Groups > Availability


Group Listeners.

3. You should now see the listener name that you created in Failover Cluster
Manager. Right-click the listener name and select Properties.
4. In the Port box, specify the port number for the availability group listener by using
the $EndpointPort you used earlier (1433 was the default), then select OK.

Test the connection to the listener


To test the connection:

1. Use Remote Desktop Protocol (RDP) to connect to a SQL Server that is in the same
virtual network, but does not own the replica. It might be the other SQL Server in
the cluster.

2. Use sqlcmd utility to test the connection. For example, the following script
establishes a sqlcmd connection to the primary replica through the listener with
Windows authentication:

sqlcmd -S <listenerName> -E

If the listener is using a port other than the default port (1433), specify the port in
the connection string. For example, the following sqlcmd command connects to a
listener at port 1435:

sqlcmd -S <listenerName>,1435 -E

The SQLCMD connection automatically connects to whichever instance of SQL Server


hosts the primary replica.

7 Note

Make sure that the port you specify is open on the firewall of both SQL Servers.
Both servers require an inbound rule for the TCP port that you use. For more
information, see Add or Edit Firewall Rule.

If you're on the secondary replica VM, and you're unable to connect to the listener, it's
possible the probe port was not configured correctly.

You can use the following script to validate the probe port is correctly configured for the
availability group:
PowerShell

Clear-Host

Get-ClusterResource `

| Where-Object {$_.ResourceType.Name -like "IP Address"} `

| Get-ClusterParameter `

| Where-Object {($_.Name -like "Network") -or ($_.Name -like "Address") -or


($_.Name -like "ProbePort") -or ($_.Name -like "SubnetMask")}

Guidelines and limitations


Note the following guidelines on availability group listener in Azure using internal load
balancer:

With an internal load balancer, you only access the listener from within the same
virtual network.

If you're restricting access with an Azure Network Security Group, ensure that the
allow rules include:
The backend SQL Server VM IP addresses
The load balancer floating IP addresses for the AG listener
The cluster core IP address, if applicable.

Create a service endpoint when using a standard load balancer with Azure Storage
for the cloud witness. For more information, see Grant access from a virtual
network.

PowerShell cmdlets
Use the following PowerShell cmdlets to create an internal load balancer for Azure
Virtual Machines.

New-AzLoadBalancer creates a load balancer.


New-AzLoadBalancerFrontendIpConfig creates a front-end IP configuration for a
load balancer.
New-AzLoadBalancerRuleConfig creates a rule configuration for a load balancer.
New-AzLoadBalancerBackendAddressPoolConfig creates a backend address pool
configuration for a load balancer.
New-AzLoadBalancerProbeConfig creates a probe configuration for a load
balancer.
Remove-AzLoadBalancer removes a load balancer from an Azure resource group.
Next steps
To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Always On availability groups overview
HADR settings for SQL Server on Azure VMs
Configure an Azure load balancer for an
AG VNN listener - SQL Server on Azure
VMs
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

On Azure virtual machines, clusters use a load balancer to hold an IP address that needs
to be on one cluster node at a time. In this solution, the load balancer holds the IP
address for the virtual network name (VNN) listener for the Always On availability group
when the SQL Server VMs are in a single subnet.

This article teaches you to configure a load balancer by using the Azure Load Balancer
service. The load balancer will route traffic to your availability group listener with SQL
Server on Azure VMs for high availability and disaster recovery (HADR).

For an alternative connectivity option for customers who are on SQL Server 2019 CU8
and later, consider a distributed network name (DNN) listener instead. A DNN listener
offers simplified configuration and improved failover.

Prerequisites
Before you complete the steps in this article, you should already have:

Decided that Azure Load Balancer is the appropriate connectivity option for your
availability group.
Installed the latest version of PowerShell.

Create a load balancer


You can create either of these types of load balancers:

Internal: An internal load balancer can be accessed only from private resources
that are internal to the network. When you configure an internal load balancer and
its rules, use the same IP address as the availability group listener for the frontend
IP address.

External: An external load balancer can route traffic from the public to internal
resources. When you configure an external load balancer, you can't use the same
IP address as the availability group listener because the listener IP address can't be
a public IP address.

To use an external load balancer, logically allocate an IP address in the same


subnet as the availability group that doesn't conflict with any other IP address. Use
this address as the frontend IP address for the load-balancing rules.

) Important

On September 30, 2025, the Basic SKU for Azure Load Balancer will be retired. For
more information, see the official announcement . If you're currently using Basic
Load Balancer, upgrade to Standard Load Balancer before the retirement date. For
guidance, review Upgrade Load Balancer.

To create the load balancer:

1. In the Azure portal , go to the resource group that contains the virtual machines.

2. Select Add. Search Azure Marketplace for load balancer. Select Load Balancer.

3. Select Create.

4. In Create load balancer, on the Basics tab, set up the load balancer by using the
following values:

Subscription: Your Azure subscription.


Resource group: The resource group that contains your virtual machines.
Name: A name that identifies the load balancer.
Region: The Azure location that contains your virtual machines.
SKU: Standard.
Type: Either Public or Internal. An internal load balancer can be accessed
from within the virtual network. Most Azure applications can use an internal
load balancer. If your application needs access to SQL Server directly over the
internet, use a public load balancer.
Tier: Regional.

5. Select Next: Frontend IP configuration.

6. Select Add a frontend IP configuration.

7. Set up the frontend IP address by using the following values:

Name: A name that identifies the frontend IP configuration.


Virtual network: The same network as the virtual machines.
Subnet: The same subnet as the virtual machines.
Assignment: Static.
IP address: The IP address that you assigned to the clustered network
resource.
Availability zone: An optional availability zone to deploy your IP address to.

8. Select Add to create the frontend IP address.

9. Select Review + Create to create the load balancer.


Configure a backend pool
1. Return to the Azure resource group that contains the virtual machines and locate
the new load balancer. You might need to refresh the view on the resource group.
Select the load balancer.

2. Select Backend pools, and then select +Add.

3. For Name, provide a name for the backend pool.

4. For Backend Pool Configuration, select NIC.

5. Select Add to associate the backend pool with the availability set that contains the
VMs.

6. Under Virtual machine, choose the virtual machines that will participate as cluster
nodes. Be sure to include all virtual machines that will host the availability group.

Add only the primary IP address of each VM. Don't add any secondary IP
addresses.

7. Select Add to add the virtual machines to the backend pool.

8. Select Save to create the backend pool.

Configure a health probe


1. On the pane for the load balancer, select Health probes.

2. On the Add health probe pane, set the following parameters:

Name: A name for the health probe.


Protocol: TCP.
Port: The port that you created in the firewall for the health probe. In this
article, the example uses TCP port 59999.
Interval: 5 Seconds.

3. Select Add.

Set load-balancing rules


1. On the pane for the load balancer, select Load-balancing rules.

2. Select Add.
3. Set these parameters:

Name: A name for the load-balancing rule.


Frontend IP address: The IP address that you set when you configured the
frontend.
Backend pool: The backend pool that contains the virtual machines targeted
for the load balancer.
HA Ports: Enables load balancing on all ports for TCP and UDP protocols.
Protocol: TCP.
Port: The SQL Server TCP port. The default is 1433.
Backend port: The same port as the Port value when you enable Floating IP
(direct server return).
Health probe: The health probe that you configured earlier.
Session persistence: None.
Idle timeout (minutes): 4.
Floating IP (direct server return): Enabled.

4. Select Save.

Configure a cluster probe


Set the cluster probe's port parameter in PowerShell.

Private load balancer

Update the variables in the following script with values from your environment.
Remove the angle brackets ( < and > ) from the script.

PowerShell

$ClusterNetworkName = "<Cluster Network Name>"

$IPResourceName = "<AG Listener IP Address Resource Name>"

$ILBIP = "<n.n.n.n>"

[int]$ProbePort = <nnnnn>

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ILBIP";"ProbePort"=$ProbePort;"SubnetMask"="255.255.255.25
5";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

The following table describes the values that you need to update:

Variable Value
Variable Value

ClusterNetworkName The name of the Windows Server failover cluster for the network. In
Failover Cluster Manager > Networks, right-click the network and
select Properties. The correct value is under Name on the General
tab.

IPResourceName The resource name for the IP address of the AG listener. In Failover
Cluster Manager > Roles, under the availability group role, under
Server Name, right-click the IP address resource and select
Properties. The correct value is under Name on the General tab.

ILBIP The IP address of the internal load balancer. This address is configured
in the Azure portal as the frontend address of the internal load
balancer. This is the same IP address as the availability group listener.
You can find it in Failover Cluster Manager, on the same properties
page where you located the value for IPResourceName .

ProbePort The probe port that you configured in the health probe of the load
balancer. Any unused TCP port is valid.

SubnetMask The subnet mask for the cluster parameter. It must be the TCP/IP
broadcast address: 255.255.255.255 .

After you set the cluster probe, you can see all the cluster parameters in PowerShell.
Run this script:

PowerShell

Get-ClusterResource $IPResourceName | Get-ClusterParameter

Modify the connection string


For clients that support it, add MultiSubnetFailover=True to the connection string.
Although the MultiSubnetFailover connection option isn't required, it provides the
benefit of a faster subnet failover. This is because the client driver tries to open a TCP
socket for each IP address in parallel. The client driver waits for the first IP address to
respond with success. After the successful response, the client driver uses that IP
address for the connection.

If your client doesn't support the MultiSubnetFailover parameter, you can modify the
RegisterAllProvidersIP and HostRecordTTL settings to prevent connectivity delays after
failover.
Use PowerShell to modify the RegisterAllProvidersIp and HostRecordTTL settings:

PowerShell

Get-ClusterResource yourListenerName | Set-ClusterParameter


RegisterAllProvidersIP 0

Get-ClusterResource yourListenerName|Set-ClusterParameter HostRecordTTL 300

To learn more, see the documentation about listener connection timeout in SQL Server.

 Tip

Set the MultiSubnetFailover parameter to true in the connection string, even


for HADR solutions that span a single subnet. This setting supports future
spanning of subnets without the need to update connection strings.
By default, clients cache cluster DNS records for 20 minutes. By reducing
HostRecordTTL , you reduce the time to live (TTL) for the cached record. Legacy

clients can then reconnect more quickly. As such, reducing the HostRecordTTL
setting might increase traffic to the DNS servers.

Test failover
Test failover of the clustered resource to validate cluster functionality:

1. Open SQL Server Management Studio and connect to your availability group
listener.
2. In Object Explorer, expand Always On Availability Group.
3. Right-click the availability group and select Failover.
4. Follow the wizard prompts to fail over the availability group to a secondary replica.

Failover succeeds when the replicas switch roles and are both synchronized.

Test connectivity
To test connectivity, sign in to another virtual machine in the same virtual network. Open
SQL Server Management Studio and connect to the availability group listener.

7 Note

If you need to, you can download SQL Server Management Studio.
Next steps
After the VNN is created, consider optimizing the cluster settings for SQL Server VMs.

To learn more, see:

Windows Server failover cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Overview of Always On availability groups
HADR settings for SQL Server on Azure VMs
Configure a DNN listener for an
availability group
Article • 04/18/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

With SQL Server on Azure VMs in a single subnet, the distributed network name (DNN)
routes traffic to the appropriate clustered resource. It provides an easier way to connect
to an Always On availability group (AG) than the virtual network name (VNN) listener,
without the need for an Azure Load Balancer.

This article teaches you to configure a DNN listener to replace the VNN listener and
route traffic to your availability group with SQL Server on Azure VMs for high availability
and disaster recovery (HADR).

For an alternative connectivity option, consider a VNN listener and Azure Load Balancer
instead.

Overview
A distributed network name (DNN) listener replaces the traditional virtual network name
(VNN) availability group listener when used with Always On availability groups on SQL
Server VMs. This negates the need for an Azure Load Balancer to route traffic,
simplifying deployment, maintenance, and improving failover.

Use the DNN listener to replace an existing VNN listener, or alternatively, use it in
conjunction with an existing VNN listener so that your availability group has two distinct
connection points - one using the VNN listener name (and port if non-default), and one
using the DNN listener name and port.
U Caution

The routing behavior when using a DNN differs when using a VNN. Do not use port
1433. To learn more, see the Port consideration section later in this article.

Prerequisites
Before you complete the steps in this article, you should already have:

SQL Server starting with either SQL Server 2019 CU8 and later, SQL Server 2017
CU25 and later, or SQL Server 2016 SP3 and later on Windows Server 2016
and later.
Decided that the distributed network name is the appropriate connectivity option
for your HADR solution.
Configured your Always On availability group.
Installed the latest version of PowerShell.
Identified the unique port that you will use for the DNN listener. The port used for
a DNN listener must be unique across all replicas of the availability group or
failover cluster instance. No other connection can share the same port.

Create script
Use PowerShell to create the distributed network name (DNN) resource and associate it
with your availability group.

To do so, follow these steps:

1. Open a text editor, such as Notepad.

2. Copy and paste the following script:

PowerShell

param (

[Parameter(Mandatory=$true)][string]$Ag,

[Parameter(Mandatory=$true)][string]$Dns,

[Parameter(Mandatory=$true)][string]$Port

Write-Host "Add a DNN listener for availability group $Ag with DNS name
$Dns and port $Port"

$ErrorActionPreference = "Stop"

# create the DNN resource with the port as the resource name

Add-ClusterResource -Name $Port -ResourceType "Distributed Network


Name" -Group $Ag

# set the DNS name of the DNN resource

Get-ClusterResource -Name $Port | Set-ClusterParameter -Name DnsName -


Value $Dns

# start the DNN resource

Start-ClusterResource -Name $Port

$Dep = Get-ClusterResourceDependency -Resource $Ag

if ( $Dep.DependencyExpression -match '\s*\((.*)\)\s*' )


{

$DepStr = "$($Matches.1) or [$Port]"

else

$DepStr = "[$Port]"

Write-Host "$DepStr"

# add the Dependency from availability group resource to the DNN


resource

Set-ClusterResourceDependency -Resource $Ag -Dependency "$DepStr"

#bounce the AG resource

Stop-ClusterResource -Name $Ag

Start-ClusterResource -Name $Ag

3. Save the script as a .ps1 file, such as add_dnn_listener.ps1 .

Execute script
To create the DNN listener, execute the script passing in parameters for the name of the
availability group, listener name, and port.

For example, assuming an availability group name of ag1 , listener name of dnnlsnr , and
listener port as 6789 , follow these steps:

1. Open a command-line interface tool, such as command prompt or PowerShell.

2. Navigate to where you saved the .ps1 script, such as c:\Documents.

3. Execute the script: add_dnn_listener.ps1 <ag name> <listener-name> <listener


port> . For example:
Console

c:\Documents> .\add_dnn_listener.ps1 ag1 dnnlsnr 6789

Verify listener
Use either SQL Server Management Studio or Transact-SQL to confirm your DNN
listener is created successfully.

SQL Server Management Studio


Expand Availability Group Listeners in SQL Server Management Studio (SSMS) to view
your DNN listener:

Transact-SQL
Use Transact-SQL to view the status of the DNN listener:

SQL
SELECT * FROM SYS.AVAILABILITY_GROUP_LISTENERS

A value of 1 for is_distributed_network_name indicates the listener is a distributed


network name (DNN) listener:

Update connection string


Update the connection string for any application that needs to connect to the DNN
listener. The connection string to the DNN listener must provide the DNN port number,
and specify MultiSubnetFailover=True in the connection string. If the SQL client does
not support the MultiSubnetFailover=True parameter, then it is not compatible with a
DNN listener.

The following is an example of a connection string for listener name DNN_Listener and
port 6789:

DataSource=DNN_Listener,6789;MultiSubnetFailover=True

Test failover
Test failover of the availability group to ensure functionality.

To test failover, follow these steps:

1. Connect to the DNN listener or one of the replicas by using SQL Server
Management Studio (SSMS).
2. Expand Always On Availability Group in Object Explorer.
3. Right-click the availability group and choose Failover to open the Failover Wizard.
4. Follow the prompts to choose a failover target and fail the availability group over
to a secondary replica.
5. Confirm the database is in a synchronized state on the new primary replica.
6. (Optional) Fail back to the original primary, or another secondary replica.
Test connectivity
Test the connectivity to your DNN listener with these steps:

1. Open SQL Server Management Studio.


2. Connect to your DNN listener.
3. Open a new query window and check which replica you're connected to by
running SELECT @@SERVERNAME .
4. Fail the availability group over to another replica.
5. After a reasonable amount of time, run SELECT @@SERVERNAME to confirm your
availability group is now hosted on another replica.

Limitations
DNN Listeners MUST be configured with a unique port. The port cannot be shared
with any other connection on any replica.
The client connecting to the DNN listener must support the
MultiSubnetFailover=True parameter in the connection string.
There might be additional considerations when you're working with other SQL
Server features and an availability group with a DNN. For more information, see AG
with DNN interoperability.

Port considerations
DNN listeners are designed to listen on all IP addresses, but on a specific, unique port.
The DNS entry for the listener name should resolve to the addresses of all replicas in the
availability group. This is done automatically with the PowerShell script provided in the
Create Script section. Since DNN listeners accept connections on all IP addresses, it is
critical that the listener port be unique, and not in use by any other replica in the
availability group. Since SQL Server listens on port 1433 by default, either directly or via
the SQL Browser service, using port 1433 for the DNN listener is strongly discouraged.

If the listener port chosen for the VNN listener is between 49,152 and 65,536 (the
default dynamic port range for TCP/IP, add an exclusion for this. Doing so will prevent
other systems from being dynamically assigned the same port.

You can add a port exclusion with the following command:


netsh int ipv4 add
excludedportrange tcp startport=<Listener Port> numberofports=1 store=persistent

Next steps
Once the availability group is deployed, consider optimizing the HADR settings for SQL
Server on Azure VMs.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Always On availability groups with SQL Server on Azure VMs
Always On availability groups overview
Feature interoperability with AG and
DNN listener
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

There are certain SQL Server features that rely on a hard-coded virtual network name
(VNN). As such, when using the distributed network name (DNN) listener with your
Always On availability group and SQL Server on Azure VMs in a single subnet, there may
be some additional considerations.

This article details SQL Server features and interoperability with the availability group
DNN listener.

Behavior differences
There are some behavior differences between the functionality of the VNN listener and
DNN listener that are important to note:

Failover time: Failover time is faster when using a DNN listener since there is no
need to wait for the network load balancer to detect the failure event and change
its routing.
Existing connections: Connections made to a specific database within a failing-over
availability group will close, but other connections to the primary replica will
remain open since the DNN stays online during the failover process. This is
different than a traditional VNN environment where all connections to the primary
replica typically close when the availability group fails over, the listener goes
offline, and the primary replica transitions to the secondary role. When using a
DNN listener, you may need to adjust application connection strings to ensure that
connections are redirected to the new primary replica upon failover.
Open transactions: Open transactions against a database in a failing-over
availability group will close and roll back, and you need to manually reconnect. For
example, in SQL Server Management Studio, close the query window and open a
new one.

Client drivers
For ODBC, OLEDB, ADO.NET, JDBC, PHP, and Node.js drivers, users need to explicitly
specify the DNN listener name and port as the server name in the connection string. To
ensure rapid connectivity upon failover, add MultiSubnetFailover=True to the
connection string if the SQL client supports it.

Tools
Users of SQL Server Management Studio, sqlcmd, Azure Data Studio, and SQL Server
Data Tools need to explicitly specify the DNN listener name and port as the server name
in the connection string to connect to the listener.

Creating the DNN listener via the SQL Server Management Studio (SSMS) GUI is
currently not supported.

Availability groups and FCI


You can configure an Always On availability group by using a failover cluster instance
(FCI) as one of the replicas. For this configuration to work with the DNN listener, the
failover cluster instance must also use the DNN as there is no way to put the FCI virtual
IP address in the AG DNN IP list.

In this configuration, the mirroring endpoint URL for the FCI replica needs to use the FCI
DNN. Likewise, if the FCI is used as a read-only replica, the read-only routing to the FCI
replica needs to use the FCI DNN.

The format for the mirroring endpoint is: ENDPOINT_URL = 'TCP://<FCI DNN DNS name>:
<mirroring endpoint port>' .

For example, if your FCI DNN DNS name is dnnlsnr , and 5022 is the port of the FCI's
mirroring endpoint, the Transact-SQL (T-SQL) code snippet to create the endpoint URL
looks like:

SQL
ENDPOINT_URL = 'TCP://dnnlsnr:5022'

Likewise, the format for the read-only routing URL is: TCP://<FCI DNN DNS name>:<SQL
Server instance port> .

For example, if your DNN DNS name is dnnlsnr , and 1444 is the port used by the read-
only target SQL Server FCI, the T-SQL code snippet to create the read-only routing URL
looks like:

SQL

READ_ONLY_ROUTING_URL = 'TCP://dnnlsnr:1444'

You can omit the port in the URL if it is the default 1433 port. For a named instance,
configure a static port for the named instance and specify it in the read-only routing
URL.

Distributed availability group


If your availability group listener is configured using a distributed network name (DNN),
then configuring a distributed availability group on top of your availability group is not
supported.

Replication
Transactional, Merge, and Snapshot Replication all support replacing the VNN listener
with the DNN listener and port in replication objects that connect to the listener.

For more information on how to use replication with availability groups, see Publisher
and AG, Subscriber and AG, and Distributor and AG.

MSDTC
Both local and clustered MSDTC are supported but MSDTC uses a dynamic port, which
requires a standard Azure Load Balancer to configure the HA port. As such, either the
VM must use a standard IP reservation, or it cannot be exposed to the internet.

Define two rules, one for the RPC Endpoint Mapper port 135, and one for the real
MSDTC port. After failover, modify the LB rule to the new MSDTC port after it changes
on the new node.
If the MSDTC is local, be sure to allow outbound communication.

Distributed query
Distributed query relies on a linked server, which can be configured using the AG DNN
listener and port. If the port is not 1433, choose the Use other data source option in
SQL Server Management Studio (SSMS) when configuring your linked server.

FileStream
Filestream is supported but not for scenarios where users access the scoped file share by
using the Windows File API.

Filetable
Filetable is supported but not for scenarios where users access the scoped file share by
using the Windows File API.

Linked servers
Configure the linked server using the AG DNN listener name and port. If the port is not
1433, choose the Use other data source option in SQL Server Management Studio
(SSMS) when configuring your linked server.

Frequently asked questions


Which SQL Server version brings AG DNN listener support?

SQL Server 2019 CU8 and later.

What is the expected failover time when the DNN listener is used?

For DNN listener, the failover time will be just the AG failover time, without any
additional time (like probe time when you're using Azure Load Balancer).

Is there any version requirement for SQL clients to support DNN with OLEDB and
ODBC?

We recommend MultiSubnetFailover=True connection string support for DNN


listener. It's available starting with SQL Server 2012 (11.x).
Are any SQL Server configuration changes required for me to use the DNN
listener?

SQL Server does not require any configuration change to use DNN, but some SQL
Server features might require more consideration.

Does DNN support multiple-subnet clusters?

Yes. The cluster binds the DNN in DNS with the physical IP addresses of all replicas
in the availability group regardless of the subnet. The SQL client tries all IP
addresses of the DNS name regardless of the subnet.

Does the availability group DNN listener support read-only routing?

Yes. Read-only routing is supported with the DNN listener.

Next steps
To learn more, see:

Always On availability groups with SQL Server on Azure VMs


Windows Server Failover Cluster with SQL Server on Azure VMs
Always On availability groups overview
HADR settings for SQL Server on Azure VMs
Prepare virtual machines for an FCI (SQL
Server on Azure VMs)
Article • 03/02/2023

Applies to:
SQL Server on Azure VM

This article describes how to prepare Azure virtual machines (VMs) to use them with a
SQL Server failover cluster instance (FCI). Configuration settings vary depending on the
FCI storage solution, so validate that you're choosing the correct configuration to suit
your environment and business.

To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best
practices.

7 Note

It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.

Prerequisites
A Microsoft Azure subscription. Get started with a free Azure account .
A Windows domain on Azure virtual machines or an on-premises active directory
extended to Azure with virtual network pairing.
An account that has permissions to create objects on Azure virtual machines and in
Active Directory.
An Azure virtual network and one or more subnets with enough IP address space
for these components:
Both virtual machines
An IP address for the Windows failover cluster
An IP address for each FCI
DNS configured on the Azure network, pointing to the domain controllers.

Choose an FCI storage option


The configuration settings for your virtual machine vary depending on the storage
option you're planning to use for your SQL Server failover cluster instance. Before you
prepare the virtual machine, review the available FCI storage options and choose the
option that best suits your environment and business need. Then carefully select the
appropriate VM configuration options throughout this article based on your storage
selection.

Choose VM availability
The failover cluster feature requires virtual machines to be placed in an availability set or
an availability zone.

Carefully select the VM availability option that matches your intended cluster
configuration:

Azure shared disks: the availability option varies if you're using Premium SSD or
UltraDisk:
Premium SSD Zone Redundant Storage (ZRS):
Availability Zone in different
zones. Premium SSD ZRS replicates your Azure managed disk synchronously
across three Azure availability zones in the selected region. VMs part of failover
cluster can be placed in different availability zones, helping you achieve a zone-
redundant SQL Server FCI that provides a VM availability SLA of 99.99%. Disk
latency for ZRS is higher due to the cross-zonal copy of data.
Premium SSD Locally Redundant Storage (LRS):
Availability Set in different
fault/update domains for Premium SSD LRS. You can also choose to place the
VMs inside a proximity placement group to locate them closer to each other.
Combining availability set and proximity placement group provides the lowest
latency for shared disks as data is replicated locally within one data center and
provides VM availability SLA of 99.95%.
Ultra Disk Locally Redundant Storage (LRS):
Availability zone but the VMs must
be placed in the same availability zone. Ultra disks offers lowest disk latency and
is best for IO intensive workloads. Since all VMs part of the FCI have be in the
same availability zone, the VM availability is only 99.9%.
Premium file shares: Availability set or Availability Zone.
Storage Spaces Direct: Availability Set.

) Important

You can't set or change the availability set after you've created a virtual machine.

Subnets
For SQL Server on Azure VMs, you have the option to deploy your SQL Server VMs to a
single subnet, or to multiple subnets.

Deploying your VMs to multiple subnets leverages the cluster OR dependency for IP
addresses and matches the on-premises experience when connecting to your failover
cluster instance. The multi-subnet approach is recommend for SQL Server on Azure VMs
for simpler manageability, and faster failover times.

Deploying your VMs to a single subnet requires an additional dependency on an Azure


Load Balancer or distributed network name (DNN) to route traffic to your FCI.

If you deploy your SQL Server VMs to multiple subnets, follow the steps in this section
to create your virtual networks with additional subnets, and then once the SQL Server
VMs are created, assign secondary IP addresses within those subnets to the VMs.
Deploying your SQL Server VMs to a single subnet does not require any additional
network configuration.

Single subnet

Place both virtual machines in a single subnet that has enough IP addresses for
both virtual machines and all FCIs that you might eventually install to the cluster.
This approach requires an extra component to route connections to your FCI, such
as an Azure Load Balancer or a distributed network name (DNN).

If you choose to deploy your SQL Server VMs to a single subnet review the
differences between the Azure Load Balancer and DNN connectivity options and
decide which option works best for you before preparing the rest of your
environment for your FCI.

Deploying your SQL Server VMs to a single subnet does not require any additional
network configuration.

Configure DNS
Configure your virtual network to use your DNS server. First, identify the DNS IP address,
and then add it to your virtual network.

Identify DNS IP address


Identify the IP address of the DNS server, and then add it to the virtual network
configuration. This section demonstrates how to identify the DNS IP address if the DNS
server is on a virtual machine in Azure.

To identify the IP address of the DNS server VM in the Azure portal, follow these steps:

1. Go to your resource group in the Azure portal and select the DNS server VM.
2. On the VM page, choose Networking in the Settings pane.
3. Note the NIC Private IP address as this is the IP address of the DNS server. In the
example image, the private IP address is 10.38.0.4.

Configure virtual network DNS


Configure the virtual network to use this the DNS server IP address.

To configure your virtual network for DNS, follow these steps:

1. Go to your resource group in the Azure portal , and select your virtual network.
2. Select DNS servers under the Settings pane and then select Custom.
3. Enter the private IP address you identified previously in the IP Address field, such
as 10.38.0.4 , or provide the internal IP address of your internal DNS server.
4. Select Save.
Create the virtual machines
After you've configured your VM virtual network and chosen VM availability, you're
ready to create your virtual machines. You can choose to use an Azure Marketplace
image that does or doesn't have SQL Server already installed on it. However, if you
choose an image for SQL Server on Azure VMs, you'll need to uninstall SQL Server from
the virtual machine before configuring the failover cluster instance.

NIC considerations
On an Azure VM guest failover cluster, we recommend a single NIC per server (cluster
node). Azure networking has physical redundancy, which makes additional NICs
unnecessary on an Azure IaaS VM guest cluster. Although the cluster validation report
will issue a warning that the nodes are only reachable on a single network, this warning
can be safely ignored on Azure IaaS VM guest failover clusters.

Place both virtual machines:

In the same Azure resource group as your availability set, if you're using availability
sets.
On the same virtual network as your domain controller and DNS server or on a
virtual network that has suitable connectivity to your domain controller.
In the Azure availability set or availability zone.
You can create an Azure virtual machine by using an image with or without SQL Server
preinstalled to it. If you choose the SQL Server image, you'll need to manually uninstall
the SQL Server instance before installing the failover cluster instance.

Assign secondary IP addresses


If you deployed your SQL Server VMs to a single subnet, skip this step. If you deployed
your SQL Server VMs to multiple subnets for improved connectivity to your FCI, you
need to assign the secondary IP addresses to each VM.

Assign secondary IP addresses to each SQL Server VM to use for the failover cluster
instance network name, and for Windows Server 2016 and earlier, assign secondary IP
addresses to each SQL Server VM for the cluster network name as well. Doing this
negates the need for an Azure Load Balancer, as is the requirement in a single subnet
environment.

On Windows Server 2016 and earlier, you need to assign an additional secondary IP
address to each SQL Server VM to use for the windows cluster IP since the cluster uses
the Cluster Network Name rather than the default distributed network name (DNN)
introduced in Windows Server 2019. With a DNN, the cluster name object (CNO) is
automatically registered with the IP addresses for all the nodes of the cluster,
eliminating the need for a dedicated windows cluster IP address.

If you're on Windows Server 2016 and prior, follow the steps in this section to assign a
secondary IP address to each SQL Server VM for both the FCI network name, and the
cluster.

If you're on Windows Server 2019 or later, only assign a secondary IP address for the FCI
network name, and skip the steps to assign a windows cluster IP, unless you plan to
configure your cluster with a virtual network name (VNN), in which case assign both IP
addresses to each SQL Server VM as you would for Windows Server 2016.

To assign additional secondary IPs to the VMs, follow these steps:

1. Go to your resource group in the Azure portal and select the first SQL Server
VM.

2. Select Networking in the Settings pane, and then select the Network Interface:
3. On the Network Interface page, select IP configurations in the Settings pane and
then choose + Add to add an additional IP address:

4. On the Add IP configuration page, do the following:


a. Specify the Name for the Windows Cluster IP address, such as windows-cluster-
ip for Windows 2016 and earlier. Skip this step if you're on Windows Server
2019 or later.
b. Set the Allocation to Static.
c. Enter an unused IP address in the same subnet (SQL-subnet-1) as the SQL
Server VM, such as 10.38.1.10 .
d. Leave the Public IP address at the default of Disassociate.
e. Select OK to finish adding the IP configuration.
5. Select + Add again to configure an additional IP address for the FCI network name
(with a name such as FCI-network-name), again specifying an unused IP address in
SQL-subnet-1 such as 10.38.1.11 :
6. Repeat these steps again for the second SQL Server VM. Assign two unused
secondary IP addresses within SQL-subnet-2. Use the values from the following
table to add the IP configuration (though the IP addresses are just examples, yours
may vary):

Field Input Input

Name windows-cluster-ip FCI-network-name

Allocation Static Static

IP address 10.38.2.10 10.38.2.11

Uninstall SQL Server


As part of the FCI creation process, you'll install SQL Server as a clustered instance to the
failover cluster. If you deployed a virtual machine with an Azure Marketplace image
without SQL Server, you can skip this step. If you deployed an image with SQL Server
preinstalled, you'll need to unregister the SQL Server VM from the SQL IaaS Agent
extension, and then uninstall SQL Server.
Unregister from the SQL IaaS Agent extension
SQL Server VM images from Azure Marketplace are automatically registered with the
SQL IaaS Agent extension. Before you uninstall the preinstalled SQL Server instance, you
must first unregister each SQL Server VM from the SQL IaaS Agent extension.

Uninstall SQL Server


After you've unregistered from the extension, you can uninstall SQL Server. Follow these
steps on each virtual machine:

1. Connect to the virtual machine by using RDP. When you first connect to a virtual
machine by using RDP, a prompt asks you if you want to allow the PC to be
discoverable on the network. Select Yes.
2. Open Programs and Features in the Control Panel.
3. In Programs and Features, right-click Microsoft SQL Server 201_ (64-bit) and
select Uninstall/Change.
4. Select Remove.
5. Select the default instance.
6. Remove all features under Database Engine Services, Analysis Services and
Reporting Services - Native. Don't remove anything under SharedFeatures. You'll
see something like the following screenshot:

7. Select Next, and then select Remove.


8. After the instance is successfully removed, restart the virtual machine.

Open the firewall


On each virtual machine, open the Windows Firewall TCP port that SQL Server uses. By
default SQL Server uses port 1433, but if you changed this in your environment, open
the port you've configured your SQL Server instance to use. Port 1433 is automatically
open on SQL Server images deployed from Azure Marketplace.

If you use a load balancer for single subnet scenario, you'll also need to open the port
that the health probe uses. By default, the health probe uses port 59999, but it can be
any TCP port that you specify when you create the load balancer.

This table details the ports that you might need to open, depending on your FCI
configuration:

Purpose Port Notes

SQL TCP Normal port for default instances of SQL Server. If you used an image from
Server 1433 the gallery, this port is automatically opened.

Used by: All FCI configurations.

Health TCP Any open TCP port. Configure the load balancer health probe and the cluster
probe 59999 to use this port.

Used by: FCI with load balancer in single subnet scenario.

File UDP Port that the file share service uses.

share 445
Used by: FCI with Premium file share.

Join the domain


You also need to join your virtual machines to the domain. You can do so by using a
quickstart template.

Review storage configuration


Virtual machines created from Azure Marketplace come with attached storage. If you
plan to configure your FCI storage by using Premium file shares or Azure shared disks,
you can remove the attached storage to save on costs because local storage is not used
for the failover cluster instance. However, it's possible to use the attached storage for
Storage Spaces Direct FCI solutions, so removing them in this case might be unhelpful.
Review your FCI storage solution to determine if removing attached storage is optimal
for saving costs.

Next steps
Now that you've prepared your virtual machine environment, you're ready to configure
your failover cluster instance.

Choose one of the following guides to configure the FCI environment that's appropriate
for your business:

Configure FCI with Azure shared disks


Configure FCI with a Premium file share
Configure FCI with Storage Spaces Direct

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Failover cluster instances with SQL Server on Azure VMs
Failover cluster instance overview
HADR settings for SQL Server on Azure VMs
Create an FCI with Azure shared disks
(SQL Server on Azure VMs)
Article • 04/18/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This article explains how to create a failover cluster instance (FCI) by using Azure shared
disks with SQL Server on Azure Virtual Machines (VMs).

To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best
practices.

7 Note

It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.

Prerequisites
Before you complete the instructions in this article, you should already have:

An Azure subscription. Get started with a free Azure account .


Two or more prepared Windows Azure virtual machines in an availability set, or
availability zones.
An account that has permissions to create objects on both Azure virtual machines
and in Active Directory.
The latest version of Azure PowerShell.
Add Azure shared disk
Deploy a managed Premium SSD disk with the shared disk feature enabled. Set
maxShares to align with the number of cluster nodes to make the disk shareable across

all FCI nodes.

Attach shared disk to VMs


Once you've deployed a shared disk with maxShares > 1, you can mount the disk to the
VMs that will participate as nodes in the cluster.

To attach the shared disk to your SQL Server VMs, follow these steps:

1. Select the VM in the Azure portal that you will attach the shared disk to.
2. Select Disks in the Settings pane.
3. Select Attach existing disks to attach the shared disk to the VM.
4. Choose the shared disk from the Disk name drop-down.
5. Select Save.
6. Repeat these steps for every cluster node SQL Server VM.

After a few moments, the shared data disk is attached to the VM and appears in the list
of Data disks for that VM.

Initialize shared disk


Once the shared disk is attached on all the VMs, you can initialize the disks of the VMs
that will participate as nodes in the cluster. Initialize the disks on all of the VMs.

To initialize the disks for your SQL Server VM, follow these steps:

1. Connect to one of the VMs.


2. From inside the VM, open the Start menu and type diskmgmt.msc in the search
box to open the Disk Management console.
3. Disk Management recognizes that you have a new, uninitialized disk and the
Initialize Disk window appears.
4. Verify the new disk is selected and then select OK to initialize it.
5. The new disk appears as unallocated. Right-click anywhere on the disk and select
New simple volume. The New Simple Volume Wizard window opens.
6. Proceed through the wizard, keeping all of the defaults, and when you're done
select Finish.
7. Close Disk Management.
8. A pop-up window appears notifying you that you need to format the new disk
before you can use it. Select Format disk.
9. In the Format new disk window, check the settings, and then select Start.
10. A warning appears notifying you that formatting the disks erases all of the data.
Select OK.
11. When the formatting is complete, select OK.
12. Repeat these steps on each SQL Server VM that will participate in the FCI.

Create Windows Failover Cluster


The steps to create your Windows Server Failover cluster vary depending on if you
deployed your SQL Server VMs to a single subnet, or multiple subnets. To create your
cluster, follow the steps in the tutorial for either a multi-subnet scenario or a single
subnet scenario. Though these tutorials are for creating an availability group, the steps
to create the cluster are the same.

Configure quorum
Since the disk witness is the most resilient quorum option, and the FCI solution uses
Azure shared disks, it's recommended to configure a disk witness as the quorum
solution.

If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.

Validate cluster
Validate the cluster on one of the virtual machines by using the Failover Cluster Manager
UI or PowerShell.

To validate the cluster using the UI, follow these steps:

1. Under Server Manager, select Tools, and then select Failover Cluster Manager.
2. Under Failover Cluster Manager, select Action, and then select Validate
Configuration.
3. Select Next.
4. Under Select Servers or a Cluster, enter the names of both virtual machines.
5. Under Testing options, select Run only tests I select.
6. Select Next.
7. Under Test Selection, select all tests except Storage.
8. Select Next.
9. Under Confirmation, select Next. The Validate a Configuration wizard runs the
validation tests.

To validate the cluster by using PowerShell, run the following script from an
administrator PowerShell session on one of the virtual machines:

PowerShell

Test-Cluster –Node ("<node1>","<node2>") –Include "Inventory", "Network",


"System Configuration"

Test cluster failover


Test the failover of your cluster. In Failover Cluster Manager, right-click your cluster,
select More Actions > Move Core Cluster Resource > Select node, and then select the
other node of the cluster. Move the core cluster resource to every node of the cluster,
and then move it back to the primary node. Ensure you can successfully move the
cluster to each node before installing SQL Server.

Add shared disks to cluster


Use the Failover Cluster Manager to add the attached Azure shared disks to the cluster.

To add disks to your cluster, follow these steps:


1. In the Server Manager dashboard, select Tools, and then select Failover Cluster
Manager.

2. Select the cluster and expand it in the navigation pane.

3. Select Storage and then select Disks.

4. Right-click Disks and select Add Disk:

5. Choose the Azure shared disk in the Add Disks to a Cluster window. Select OK.
6. After the shared disk is added to the cluster, you will see it in the Failover Cluster
Manager.

Create SQL Server FCI


After you've configured the failover cluster and all cluster components, including
storage, you can create the SQL Server FCI.

1. Connect to the first virtual machine by using Remote Desktop Protocol (RDP).

2. In Failover Cluster Manager, make sure that all core cluster resources are on the
first virtual machine. If necessary, move the disks to that virtual machine.
3. If the version of the operating system is Windows Server 2019 and the Windows
Cluster was created using the default Distributed Network Name (DNN) , then
the FCI installation for SQL Server 2017 and below will fail with the error The given
key was not present in the dictionary .

During installation, SQL Server setup queries for the existing Virtual Network Name
(VNN) and doesn't recognize the Windows Cluster DNN. The issue has been fixed
in SQL Server 2019 setup. For SQL Server 2017 and below, follow these steps to
avoid the installation error:

In Failover Cluster Manager, connect to the cluster, right-click Roles and


select Create Empty Role.
Right-click the newly created empty role, select Add Resource and select
Client Access Point.
Enter any name and complete the wizard to create the Client Access Point.
After the SQL Server FCI installation completes, the role containing the
temporary Client Access Point can be deleted.

4. Locate the installation media. If the virtual machine uses one of the Azure
Marketplace images, the media is located at C:\SQLServer_<version number>_Full .

5. Select Setup.

6. In SQL Server Installation Center, select Installation.

7. Select New SQL Server failover cluster installation. Follow the instructions in the
wizard to install the SQL Server FCI.

8. On the Cluster Disk Selection page, select all the shared disks that were attached
to the VM.
9. On the Cluster Network Configuration page, the IP you provide varies depending
on if your SQL Server VMs were deployed to a single subnet, or multiple subnets.
a. For a single subnet environment, provide the IP address that you plan to add
to the Azure Load Balancer
b. For a multi-subnet environment, provide the secondary IP address in the
subnet of the first SQL Server VM that you previously designated as the IP
address of the failover cluster instance network name:
10. On the Database Engine Configuration page, ensure the database directories are
on the Azure shared disk(s).

11. After you complete the instructions in the wizard, setup installs the SQL Server FCI
on the first node.

12. After FCI installation succeeds on the first node, connect to the second node by
using RDP.

13. Open the SQL Server Installation Center, and then select Installation.

14. Select Add node to a SQL Server failover cluster. Follow the instructions in the
wizard to install SQL Server and add the node to the FCI.

15. For a multi-subnet scenario, in Cluster Network Configuration, enter the


secondary IP address in the subnet of the second SQL Server VM subnet that you
previously designated as the IP address of the failover cluster instance network
name
After selecting Next in Cluster Network Configuration, setup shows a dialog box
indicating that SQL Server Setup detected multiple subnets as in the example
image. Select Yes to confirm.

16. After you complete the instructions in the wizard, setup adds the second SQL
Server FCI node.

17. Repeat these steps on any other SQL Server VMs you want to participate in the
SQL Server failover cluster instance.

7 Note

Azure Marketplace gallery images come with SQL Server Management Studio
installed. If you didn't use a marketplace image Download SQL Server
Management Studio (SSMS).
Register with SQL IaaS Agent extension
To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent
extension. Note that only limited functionality will be available on SQL VMs that have
failover clustered instances of SQL Server (FCIs).

If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister the
SQL Server VM from the extension and register it again after your FCI is installed.

Register a SQL Server VM with PowerShell (-LicenseType can be PAYG or AHUB ):

PowerShell

# Get the existing compute VM

$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>

# Register SQL VM with SQL IaaS Agent extension

New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -


Location $vm.Location `
-LicenseType <license_type>

Configure connectivity
If you deployed your SQL Server VMs in multiple subnets, skip this step. If you deployed
your SQL Server VMs to a single subnet, then you'll need to configure an additional
component to route traffic to your FCI. You can configure a virtual network name (VNN)
with an Azure Load Balancer, or a distributed network name for a failover cluster
instance. Review the differences between the two and then deploy either a distributed
network name or a virtual network name and Azure Load Balancer for your failover
cluster instance.

Limitations
Azure virtual machines support Microsoft Distributed Transaction Coordinator
(MSDTC) on Windows Server 2019 with storage on CSVs and a standard load
balancer. MSDTC is not supported on Windows Server 2016 and earlier.
SQL Server FCIs registered with the extension do not support features that require
the agent, such as automated backup, patching, and advanced portal
management. See the table of benefits.
Next steps
If Azure shared disks are not the appropriate FCI storage solution for you, consider
creating your FCI using premium file shares or Storage Spaces Direct instead.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Failover cluster instances with SQL Server on Azure VMs
Failover cluster instance overview
HADR settings for SQL Server on Azure VMs
Create an FCI with Storage Spaces Direct
(SQL Server on Azure VMs)
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This article explains how to create a failover cluster instance (FCI) by using Storage
Spaces Direct with SQL Server on Azure Virtual Machines (VMs). Storage Spaces Direct
acts as a software-based virtual storage area network (VSAN) that synchronizes the
storage (data disks) between the nodes (Azure VMs) in a Windows cluster.

To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best
practices.

7 Note

It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.

Overview
Storage Spaces Direct (S2D) supports two types of architectures: converged and
hyperconverged. A hyperconverged infrastructure places the storage on the same
servers that host the clustered application, so that storage is on each SQL Server FCI
node.

The following diagram shows the complete solution, which uses hyperconverged
Storage Spaces Direct with SQL Server on Azure VMs:
The preceding diagram shows the following resources in the same resource group:

Two virtual machines in a Windows Server failover cluster. When a virtual machine
is in a failover cluster, it's also called a cluster node or node.
Each virtual machine has two or more data disks.
Storage Spaces Direct synchronizes the data on the data disks and presents the
synchronized storage as a storage pool.
The storage pool presents a Cluster Shared Volume (CSV) to the failover cluster.
The SQL Server FCI cluster role uses the CSV for the data drives.
An Azure load balancer to hold the IP address for the SQL Server FCI for a single
subnet scenario.
An Azure availability set holds all the resources.

7 Note

You can create this entire solution in Azure from a template. An example of a
template is available on the GitHub Azure quickstart templates page. This
example isn't designed or tested for any specific workload. You can run the
template to create a SQL Server FCI with Storage Spaces Direct storage connected
to your domain. You can evaluate the template and modify it for your purposes.

Prerequisites
Before you complete the instructions in this article, you should already have:
An Azure subscription. Get started with a free Azure account .
Two or more prepared Windows Azure virtual machines in an availability set.
An account that has permissions to create objects on both Azure virtual machines
and in Active Directory.
The latest version of PowerShell.

Create Windows Failover Cluster


The steps to create your Windows Server Failover cluster vary depending on if you
deployed your SQL Server VMs to a single subnet, or multiple subnets. To create your
cluster, follow the steps in the tutorial for either a multi-subnet scenario or a single
subnet scenario. Though these tutorials are for creating an availability group, the steps
to create the cluster are the same.

Configure quorum
Although the disk witness is the most resilient quorum option, it's not supported for
failover cluster instances configured with Storage Spaces Direct. As such, the cloud
witness is the recommended quorum solution for this type of cluster configuration for
SQL Server on Azure VMs.

If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.

Validate the cluster


Validate the cluster in the Failover Cluster Manager UI or by using PowerShell.

To validate the cluster by using the UI, do the following on one of the virtual machines:

1. Under Server Manager, select Tools, and then select Failover Cluster Manager.

2. Under Failover Cluster Manager, select Action, and then select Validate
Configuration.

3. Select Next.

4. Under Select Servers or a Cluster, enter the names of both virtual machines.

5. Under Testing options, select Run only tests I select.

6. Select Next.
7. Under Test Selection, select all tests except for Storage, as shown here:

8. Select Next.

9. Under Confirmation, select Next.

The Validate a Configuration wizard runs the validation tests.

To validate the cluster by using PowerShell, run the following script from an
administrator PowerShell session on one of the virtual machines:

PowerShell

Test-Cluster –Node ("<node1>","<node2>") –Include "Storage Spaces Direct",


"Inventory", "Network", "System Configuration"

Add storage
The disks for Storage Spaces Direct need to be empty. They can't contain partitions or
other data. To clean the disks, follow the instructions in Deploy Storage Spaces Direct.

1. Enable Storage Spaces Direct.

The following PowerShell script enables Storage Spaces Direct:

PowerShell
Enable-ClusterS2D

In Failover Cluster Manager, you can now see the storage pool.

2. Create a volume.

Storage Spaces Direct automatically creates a storage pool when you enable it.
You're now ready to create a volume. The PowerShell cmdlet New-Volume
automates the volume creation process. This process includes formatting, adding
the volume to the cluster, and creating a CSV. This example creates an 800
gigabyte (GB) CSV:

PowerShell

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VDisk01 -


FileSystem CSVFS_REFS -Size 800GB

After you've run the preceding command, an 800-GB volume is mounted as a


cluster resource. The volume is at C:\ClusterStorage\Volume1\ .

This screenshot shows a CSV with Storage Spaces Direct:

Test cluster failover


Test the failover of your cluster. In Failover Cluster Manager, right-click your cluster,
select More Actions > Move Core Cluster Resource > Select node, and then select the
other node of the cluster. Move the core cluster resource to every node of the cluster,
and then move it back to the primary node. If you can successfully move the cluster to
each node, you're ready to install SQL Server.
Create SQL Server FCI
After you've configured the failover cluster and all cluster components, including
storage, you can create the SQL Server FCI.

1. Connect to the first virtual machine by using RDP.

2. In Failover Cluster Manager, make sure all core cluster resources are on the first
virtual machine. If necessary, move all resources to that virtual machine.

3. If the version of the operating system is Windows Server 2019 and the Windows
Cluster was created using the default Distributed Network Name (DNN) , then
the FCI installation for SQL Server 2017 and below will fail with the error The given
key was not present in the dictionary .

During installation, SQL Server setup queries for the existing Virtual Network Name
(VNN) and doesn't recognize the Windows Cluster DNN. The issue has been fixed
in SQL Server 2019 setup. For SQL Server 2017 and below, follow these steps to
avoid the installation error:

In Failover Cluster Manager, connect to the cluster, right-click Roles and


select Create Empty Role.
Right-click the newly created empty role, select Add Resource and select
Client Access Point.
Enter any name and complete the wizard to create the Client Access Point.
After the SQL Server FCI installation completes, the role containing the
temporary Client Access Point can be deleted.
4. Locate the installation media. If the virtual machine uses one of the Azure
Marketplace images, the media is located at C:\SQLServer_<version number>_Full .
Select Setup.

5. In SQL Server Installation Center, select Installation.

6. Select New SQL Server failover cluster installation. Follow the instructions in the
wizard to install the SQL Server FCI.

7. On the Cluster Network Configuration page, the IP you provide varies depending
on if your SQL Server VMs were deployed to a single subnet, or multiple subnets.
a. For a single subnet environment, provide the IP address that you plan to add
to the Azure Load Balancer
b. For a multi-subnet environment, provide the secondary IP address in the
subnet of the first SQL Server VM that you previously designated as the IP
address of the failover cluster instance network name:

8. In Database Engine Configuration, The FCI data directories need to be on


clustered storage. With Storage Spaces Direct, it's not a shared disk but a mount
point to a volume on each server. Storage Spaces Direct synchronizes the volume
between both nodes. The volume is presented to the cluster as a CSV. Use the CSV
mount point for the data directories.
9. After you complete the instructions in the wizard, Setup installs a SQL Server FCI
on the first node.

10. After FCI installation succeeds on the first node, connect to the second node by
using RDP.

11. Open the SQL Server Installation Center. Select Installation.

12. Select Add node to a SQL Server failover cluster. Follow the instructions in the
wizard to install SQL Server and add the node to the FCI.

13. For a multi-subnet scenario, in Cluster Network Configuration, enter the


secondary IP address in the subnet of the second SQL Server VM that you
previously designated as the IP address of the failover cluster instance network
name
After selecting Next in Cluster Network Configuration, setup shows a dialog box
indicating that SQL Server Setup detected multiple subnets as in the example
image. Select Yes to confirm.

14. After you complete the instructions in the wizard, setup adds the second SQL
Server FCI node.

15. Repeat these steps on any other nodes that you want to add to the SQL Server
failover cluster instance.

7 Note

Azure Marketplace gallery images come with SQL Server Management Studio
installed. If you didn't use a marketplace image Download SQL Server
Management Studio (SSMS).
Register with SQL IaaS Agent extension
To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent
extension. Note that only limited functionality will be available on SQL VMs that have
failover clustered instances of SQL Server (FCIs).

If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister the
SQL Server VM from the extension and register it again after your FCI is installed.

Register a SQL Server VM with PowerShell (-LicenseType can be PAYG or AHUB ):

PowerShell

# Get the existing compute VM

$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>

# Register SQL VM with SQL IaaS Agent extension

New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -


Location $vm.Location `
-LicenseType <license_type>

Configure connectivity
If you deployed your SQL Server VMs in multiple subnets, skip this step. If you deployed
your SQL Server VMs to a single subnet, then you'll need to configure an additional
component to route traffic to your FCI. You can configure a virtual network name (VNN)
with an Azure Load Balancer, or a distributed network name for a failover cluster
instance. Review the differences between the two and then deploy either a distributed
network name or a virtual network name and Azure Load Balancer for your failover
cluster instance.

Limitations
Azure virtual machines support Microsoft Distributed Transaction Coordinator
(MSDTC) on Windows Server 2019 with storage on CSVs and a standard load
balancer. MSDTC is not supported on Windows Server 2016 and earlier.
Disks that have been attached as NTFS-formatted disks can be used with Storage
Spaces Direct only if the disk eligibility option is unchecked, or cleared, when
storage is being added to the cluster.
SQL Server FCIs registered with the extension do not support features that require
the agent, such as automated backup, patching, and advanced portal
management. See the table of benefits.
Failover cluster instances using Storage Spaces Direct as the shared storage do not
support using a disk witness for the quorum of the cluster. Use a cloud witness
instead.

Next steps
If Storage Spaces Direct isn't the appropriate FCI storage solution for you, consider
creating your FCI by using Azure shared disks or Premium File Shares instead.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Failover cluster instances with SQL Server on Azure VMs
Failover cluster instance overview
HADR settings for SQL Server on Azure VMs
Create an FCI with a premium file share
(SQL Server on Azure VMs)
Article • 03/30/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

This article explains how to create a failover cluster instance (FCI) with SQL Server on
Azure Virtual Machines (VMs) by using a premium file share.

Premium file shares are SSD backed and provide consistently low-latency file shares that
are fully supported for use with failover cluster instances for SQL Server 2012 or later on
Windows Server 2012 or later. Premium file shares give you greater flexibility, allowing
you to resize and scale a file share without any downtime.

To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best
practices.

7 Note

It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.

Prerequisites
Before you complete the instructions in this article, you should already have:

An Azure subscription.
An account that has permissions to create objects on both Azure virtual machines
and in Active Directory.
Two or more prepared Windows Azure virtual machines in an availability set or
different availability zones.
A premium file share to be used as the clustered drive, based on the storage quota
of your database for your data files.
The latest version of PowerShell.

Mount premium file share


To mount your premium file share, follow these steps:

1. Sign in to the Azure portal . and go to your storage account.

2. Go to File shares under Data storage, and then select the premium file share you
want to use for your SQL storage.

3. Select Connect to bring up the connection string for your file share.

4. In the drop-down list, select the drive letter you want to use, choose Storage
account key as the authentication method, and then copy the code block to a text
editor, such as Notepad.
5. Use Remote Desktop Protocol (RDP) to connect to the SQL Server VM with the
account that your SQL Server FCI will use for the service account.

6. Open an administrative PowerShell command console.

7. Run the command that you copied earlier to your text editor from the File share
portal.

8. Go to the share by using either File Explorer or the Run dialog box (Windows + R
on your keyboard). Use the network path
\\storageaccountname.file.core.windows.net\filesharename . For example,

\\sqlvmstorageaccount.file.core.windows.net\sqlpremiumfileshare

9. Create at least one folder on the newly connected file share to place your SQL data
files into.

10. Repeat these steps on each SQL Server VM that will participate in the cluster.

) Important

Consider using a separate file share for backup files to save the input/output
operations per second (IOPS) and space capacity of this share for data and log files.
You can use either a Premium or Standard File Share for backup files.

Create Windows Failover Cluster


The steps to create your Windows Server Failover cluster vary depending on if you
deployed your SQL Server VMs to a single subnet, or multiple subnets. To create your
cluster, follow the steps in the tutorial for either a multi-subnet scenario or a single
subnet scenario. Though these tutorials are for creating an availability group, the steps
to create the cluster are the same.

Configure quorum
The cloud witness is the recommended quorum solution for this type of cluster
configuration for SQL Server on Azure VMs.

If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.

Validate cluster
Validate the cluster on one of the virtual machines by using the Failover Cluster Manager
UI or PowerShell.

To validate the cluster by using the UI, do the following on one of the virtual machines:

1. Under Server Manager, select Tools, and then select Failover Cluster Manager.

2. Under Failover Cluster Manager, select Action, and then select Validate
Configuration.

3. Select Next.

4. Under Select Servers or a Cluster, enter the names of both virtual machines.

5. Under Testing options, select Run only tests I select.

6. Select Next.

7. Under Test Selection, select all tests except for Storage and Storage Spaces Direct,
as shown here:

8. Select Next.

9. Under Confirmation, select Next. The Validate a Configuration wizard runs the
validation tests.

To validate the cluster by using PowerShell, run the following script from an
administrator PowerShell session on one of the virtual machines:
PowerShell

Test-Cluster –Node ("<node1>","<node2>") –Include "Inventory", "Network",


"System Configuration"

Test cluster failover


Test the failover of your cluster. In Failover Cluster Manager, right-click your cluster,
select More Actions > Move Core Cluster Resource > Select node, and then select the
other node of the cluster. Move the core cluster resource to every node of the cluster,
and then move it back to the primary node. If you can successfully move the cluster to
each node, you're ready to install SQL Server.

Create SQL Server FCI


After you've configured the failover cluster, you can create the SQL Server FCI.

1. Connect to the first virtual machine by using RDP.

2. In Failover Cluster Manager, make sure that all the core cluster resources are on
the first virtual machine. If necessary, move all resources to this virtual machine.

3. If the version of the operating system is Windows Server 2019 and the Windows
Cluster was created using the default Distributed Network Name (DNN) , then
the FCI installation for SQL Server 2017 and below will fail with the error The given
key was not present in the dictionary .
During installation, SQL Server setup queries for the existing Virtual Network Name
(VNN) and doesn't recognize the Windows Cluster DNN. The issue has been fixed
in SQL Server 2019 setup. For SQL Server 2017 and below, follow these steps to
avoid the installation error:

In Failover Cluster Manager, connect to the cluster, right-click Roles and


select Create Empty Role.
Right-click the newly created empty role, select Add Resource and select
Client Access Point.
Enter any name and complete the wizard to create the Client Access Point.
After the SQL Server FCI installation completes, the role containing the
temporary Client Access Point can be deleted.

4. Locate the installation media. If the virtual machine uses one of the Azure
Marketplace images, the media is located at C:\SQLServer_<version number>_Full .

5. Select Setup.

6. In the SQL Server Installation Center, select Installation.

7. Select New SQL Server failover cluster installation, and then follow the
instructions in the wizard to install the SQL Server FCI.

8. On the Cluster Network Configuration page, the IP you provide varies depending
on if your SQL Server VMs were deployed to a single subnet, or multiple subnets.
a. For a single subnet environment, provide the IP address that you plan to add
to the Azure Load Balancer
b. For a multi-subnet environment, provide the secondary IP address in the
subnet of the first SQL Server VM that you previously designated as the IP
address of the failover cluster instance network name:
9. In Database Engine Configuration, the data directories need to be on the
premium file share. Enter the full path of the share, in this format:
\\storageaccountname.file.core.windows.net\filesharename\foldername . A warning

appears, telling you that you've specified a file server as the data directory. This
warning is expected. Ensure that the user account you used to access the VM via
RDP when you persisted the file share is the same account that the SQL Server
service uses to avoid possible failures.
10. After you complete the steps in the wizard, Setup installs a SQL Server FCI on the
first node.

11. After FCI installation succeeds on the first node, connect to the second node by
using RDP.

12. Open the SQL Server Installation Center, and then select Installation.

13. Select Add node to a SQL Server failover cluster. Follow the instructions in the
wizard to install SQL Server and add the node to the FCI.

14. For a multi-subnet scenario, in Cluster Network Configuration, enter the


secondary IP address in the subnet of the second SQL Server VM that you
previously designated as the IP address of the failover cluster instance network
name

After selecting Next in Cluster Network Configuration, setup shows a dialog box
indicating that SQL Server Setup detected multiple subnets as in the example
image. Select Yes to confirm.
15. After you complete the instructions in the wizard, setup adds the second SQL
Server FCI node.

16. Repeat these steps on any other nodes that you want to add to the SQL Server
failover cluster instance.

7 Note

Azure Marketplace gallery images come with SQL Server Management Studio
installed. If you didn't use a marketplace image Download SQL Server
Management Studio (SSMS).

Register with SQL IaaS Agent extension


To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent
extension. Note that only limited functionality will be available on SQL VMs that have
failover clustered instances of SQL Server (FCIs).

If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister the
SQL Server VM from the extension and register it again after your FCI is installed.

Register a SQL Server VM with PowerShell (-LicenseType can be PAYG or AHUB ):

PowerShell

# Get the existing compute VM

$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>

# Register SQL VM with SQL IaaS Agent extension

New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -


Location $vm.Location `
-LicenseType <license_type>

Configure connectivity
If you deployed your SQL Server VMs in multiple subnets, skip this step. If you deployed
your SQL Server VMs to a single subnet, then you'll need to configure an additional
component to route traffic to your FCI. You can configure a virtual network name (VNN)
with an Azure Load Balancer, or a distributed network name for a failover cluster
instance. Review the differences between the two and then deploy either a distributed
network name or a virtual network name and Azure Load Balancer for your failover
cluster instance.

Limitations
Microsoft Distributed Transaction Coordinator (MSDTC) is not supported on
Windows Server 2016 and earlier.
Filestream isn't supported for a failover cluster with a premium file share. To use
filestream, deploy your cluster by using Storage Spaces Direct or Azure shared
disks instead.
SQL Server FCIs registered with the extension do not support features that require
the agent, such as automated backup, patching, and advanced portal
management. See the table of benefits.
Database Snapshots are not currently supported with Azure Files due to sparse
files limitations.
Since database snapshots are not supported, CHECKDB for user databases falls
back to CHECKDB WITH TABLOCK. TABLOCK limits the checks that are performed -
DBCC CHECKCATALOG is not run on the database, and Service Broker data is not
validated.
DBCC CHECKDB on master and msdb database is not supported.
Databases that use the in-memory OLTP feature are not supported on a failover
cluster instance deployed with a premium file share. If your business requires in-
memory OLTP, consider deploying your FCI with Azure shared disks or Storage
Spaces Direct instead.

Limited extension support


At this time, SQL Server failover cluster instances on Azure virtual machines registered
with the SQL IaaS Agent extension only support a limited number of features. See the
table of benefits.

If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister from
the extension by deleting the SQL virtual machine resource for the corresponding VMs
and then register it with the SQL IaaS Agent extension again. When you're deleting the
SQL virtual machine resource by using the Azure portal, clear the check box next to the
correct virtual machine to avoid deleting the virtual machine.

Next steps
If premium file shares are not the appropriate FCI storage solution for you, consider
creating your FCI by using Azure shared disks or Storage Spaces Direct instead.

To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Failover cluster instances with SQL Server on Azure VMs
Failover cluster instance overview
HADR settings for SQL Server on Azure VMs
Configure an Azure load balancer for an
FCI VNN - SQL Server on Azure VMs
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

On Azure virtual machines, clusters use a load balancer to hold an IP address that needs
to be on one cluster node at a time. In this solution, the load balancer holds the IP
address for the virtual network name (VNN) that the clustered resource uses in Azure.

This article teaches you to configure a load balancer by using the Azure Load Balancer
service. The load balancer will route traffic to your failover cluster instance with SQL
Server on Azure VMs for high availability and disaster recovery (HADR).

For an alternative connectivity option for SQL Server 2019 CU2 and later, consider a
distributed network name (DNN) instead. A DNN offers simplified configuration and
improved failover.

Prerequisites
Before you complete the steps in this article, you should already have:

Determined that Azure Load Balancer is the appropriate connectivity option for
your FCI.
Configured your FCI.
Installed the latest version of PowerShell.

Create a load balancer


You can create either of these types of load balancers:
Internal: An internal load balancer can be accessed only from private resources
that are internal to the network. When you configure an internal load balancer and
its rules, use the FCI IP address as the front-end IP address.

External: An external load balancer can route traffic from the public to internal
resources. When you configure an external load balancer, you can't use a public IP
address like the FCI IP address.

To use an external load balancer, logically allocate an IP address in the same


subnet as the FCI that doesn't conflict with any other IP address. Use this address
as the front-end IP address for the load-balancing rules.

To create the load balancer:

1. In the Azure portal , go to the resource group that contains the virtual machines.

2. Select Add. Search Azure Marketplace for load balancer. Select Load Balancer.

3. Select Create.

4. In Create load balancer, on the Basics tab, set up the load balancer by using the
following values:

Subscription: Your Azure subscription.


Resource group: The resource group that contains your virtual machines.
Name: A name that identifies the load balancer.
Region: The Azure location that contains your virtual machines.
SKU: Standard.
Type: Either Public or Internal. An internal load balancer can be accessed
from within the virtual network. Most Azure applications can use an internal
load balancer. If your application needs access to SQL Server directly over the
internet, use a public load balancer.
Tier: Regional.

5. Select Next: Frontend IP configuration.

6. Select Add a frontend IP configuration.

7. Set up the front-end IP address by using the following values:

Name: A name that identifies the front-end IP configuration.


Virtual network: The same network as the virtual machines.
Subnet: The same subnet as the virtual machines.
Assignment: Static.
IP address: The IP address that you assigned to the clustered network
resource.
Availability zone: An optional availability zone to deploy your IP address to.

8. Select Add to create the front-end IP address.

9. Choose Review + Create to create the load balancer.

Configure a backend pool


1. Return to the Azure resource group that contains the virtual machines and locate
the new load balancer. You might need to refresh the view on the resource group.
Select the load balancer.

2. Select Backend pools, and then select +Add.

3. For Name, provide a name for the backend pool.

4. For Backend Pool Configuration, select NIC.

5. Select Add to associate the backend pool with the availability set that contains the
VMs.

6. Under Virtual machine, choose the virtual machines that will participate as cluster
nodes. Be sure to include all virtual machines that will host the FCI.

Add only the primary IP address of each VM. Don't add any secondary IP
addresses.

7. Select Add to add the virtual machines to the backend pool.

8. Select Save to create the backend pool.

Configure a health probe


1. On the pane for the load balancer, select Health probes.

2. On the Add health probe pane, set the following parameters:

Name: A name for the health probe.


Protocol: TCP.
Port: The port that you created in the firewall for the health probe when
preparing the VM. In this article, the example uses TCP port 59999.
Interval: 5 Seconds.

3. Select Add.

Set load-balancing rules


1. On the pane for the load balancer, select Load-balancing rules.

2. Select Add.

3. Set these parameters:


Name: A name for the load-balancing rule.
Frontend IP address: The IP address that you set when you configured the
frontend.
Backend pool: The backend pool that contains the virtual machines targeted
for the load balancer.
HA Ports: Enables load balancing on all ports for TCP and UDP protocols.
Protocol: TCP.
Port: The SQL Server TCP port. The default is 1433.
Backend port: The same port as the Port value when you enable Floating IP
(direct server return).
Health probe: The health probe that you configured earlier.
Session persistence: None.
Idle timeout (minutes): 4.
Floating IP (direct server return): Enabled.

4. Select Save.

Configure a cluster probe


Set the cluster probe's port parameter in PowerShell.

Private load balancer

Update the variables in the following script with values from your environment.
Remove the angle brackets ( < and > ) from the script.

PowerShell

$ClusterNetworkName = "<Cluster Network Name>"

$IPResourceName = "<SQL Server FCI IP Address Resource Name>"

$ILBIP = "<n.n.n.n>"

[int]$ProbePort = <nnnnn>

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple


@{"Address"="$ILBIP";"ProbePort"=$ProbePort;"SubnetMask"="255.255.255.25
5";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

The following table describes the values that you need to update:

Variable Value
Variable Value

ClusterNetworkName The name of the Windows Server failover cluster for the network. In
Failover Cluster Manager > Networks, right-click the network and
select Properties. The correct value is under Name on the General
tab.

IPResourceName The resource name for the IP address of the SQL Server FCI. In
Failover Cluster Manager > Roles, under the SQL Server FCI role,
under Server Name, right-click the IP address resource and select
Properties. The correct value is under Name on the General tab.

ILBIP The IP address of the internal load balancer. This address is configured
in the Azure portal as the internal load balancer's frontend address.
This is also the IP address of the SQL Server FCI. You can find it in
Failover Cluster Manager, on the same properties page where you
located the value for IPResourceName .

ProbePort The probe port that you configured in the load balancer's health
probe. Any unused TCP port is valid.

SubnetMask The subnet mask for the cluster parameter. It must be the TCP/IP
broadcast address: 255.255.255.255 .

After you set the cluster probe, you can see all the cluster parameters in PowerShell.
Run this script:

PowerShell

Get-ClusterResource $IPResourceName | Get-ClusterParameter

Modify the connection string


For clients that support it, add MultiSubnetFailover=True to the connection string.
Although the MultiSubnetFailover connection option isn't required, it provides the
benefit of a faster subnet failover. This is because the client driver tries to open a TCP
socket for each IP address in parallel. The client driver waits for the first IP address to
respond with success. After the successful response, the client driver uses that IP
address for the connection.

If your client doesn't support the MultiSubnetFailover parameter, you can modify the
RegisterAllProvidersIP and HostRecordTTL settings to prevent connectivity delays upon

failover.
Use PowerShell to modify the RegisterAllProvidersIp and HostRecordTTL settings:

PowerShell

Get-ClusterResource yourFCIname | Set-ClusterParameter


RegisterAllProvidersIP 0

Get-ClusterResource yourFCIname | Set-ClusterParameter HostRecordTTL 300

To learn more, see the documentation about listener connection timeout in SQL Server.

 Tip

Set the MultiSubnetFailover parameter to true in the connection string, even


for HADR solutions that span a single subnet. This setting supports future
spanning of subnets without the need to update connection strings.
By default, clients cache cluster DNS records for 20 minutes. By reducing
HostRecordTTL , you reduce the time to live (TTL) for the cached record. Legacy

clients can then reconnect more quickly. As such, reducing the HostRecordTTL
setting might increase traffic to the DNS servers.

Test failover
Test failover of the clustered resource to validate cluster functionality:

1. Connect to one of the SQL Server cluster nodes by using Remote Desktop Protocol
(RDP).
2. Open Failover Cluster Manager. Select Roles. Notice which node owns the SQL
Server FCI role.
3. Right-click the SQL Server FCI role.
4. Select Move, and then select Best Possible Node.

Failover Cluster Manager shows the role, and its resources go offline. The resources
then move and come back online in the other node.

Test connectivity
To test connectivity, sign in to another virtual machine in the same virtual network. Open
SQL Server Management Studio and connect to the SQL Server FCI name.

7 Note
If you need to, you can download SQL Server Management Studio.

Next steps
To learn more, see:

Windows Server failover cluster with SQL Server on Azure VMs


Failover cluster instances with SQL Server on Azure VMs
Overview of failover cluster instances
HADR settings for SQL Server on Azure VMs
Configure a DNN for failover cluster
instance
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

On Azure Virtual Machines, the distributed network name (DNN) routes traffic to the
appropriate clustered resource. It provides an easier way to connect to the SQL Server
failover cluster instance (FCI) than the virtual network name (VNN), without the need for
an Azure Load Balancer.

This article teaches you to configure a DNN resource to route traffic to your failover
cluster instance with SQL Server on Azure VMs for high availability and disaster recovery
(HADR).

For an alternative connectivity option, consider a virtual network name and Azure Load
Balancer instead.

Overview
The distributed network name (DNN) replaces the virtual network name (VNN) as the
connection point when used with an Always On failover cluster instance on SQL Server
VMs. This negates the need for an Azure Load Balancer routing traffic to the VNN,
simplifying deployment, maintenance, and improving failover.

With an FCI deployment, the VNN still exists, but the client connects to the DNN DNS
name instead of the VNN name.

Prerequisites
Before you complete the steps in this article, you should already have:

SQL Server starting with either SQL Server 2019 CU8 and later, SQL Server 2017
CU25 and later, or SQL Server 2016 SP3 and later on Windows Server 2016
and later.
Decided that the distributed network name is the appropriate connectivity option
for your HADR solution.
Configured your failover cluster instances.
Installed the latest version of PowerShell.

Create DNN resource


The DNN resource is created in the same cluster group as the SQL Server FCI. Use
PowerShell to create the DNN resource inside the FCI cluster group.

The following PowerShell command adds a DNN resource to the SQL Server FCI cluster
group with a resource name of <dnnResourceName> . The resource name is used to
uniquely identify a resource. Use one that makes sense to you and is unique across the
cluster. The resource type must be Distributed Network Name .

The -Group value must be the name of the cluster group that corresponds to the SQL
Server FCI where you want to add the distributed network name. For a default instance,
the typical format is SQL Server (MSSQLSERVER) .

PowerShell

Add-ClusterResource -Name <dnnResourceName> `

-ResourceType "Distributed Network Name" -Group "<WSFC role of SQL server


instance>"

For example, to create your DNN resource dnn-demo for a default SQL Server FCI, use the
following PowerShell command:

PowerShell

Add-ClusterResource -Name dnn-demo `

-ResourceType "Distributed Network Name" -Group "SQL Server (MSSQLSERVER)"

Set cluster DNN DNS name


Set the DNS name for the DNN resource in the cluster. The cluster then uses this value
to route traffic to the node that's currently hosting the SQL Server FCI.

Clients use the DNS name to connect to the SQL Server FCI. You can choose a unique
value. Or, if you already have an existing FCI and don't want to update client connection
strings, you can configure the DNN to use the current VNN that clients are already
using. To do so, you need to rename the VNN before setting the DNN in DNS.

Use this command to set the DNS name for your DNN:

PowerShell

Get-ClusterResource -Name <dnnResourceName> | `

Set-ClusterParameter -Name DnsName -Value <DNSName>

The DNSName value is what clients use to connect to the SQL Server FCI. For example, for
clients to connect to FCIDNN , use the following PowerShell command:

PowerShell

Get-ClusterResource -Name dnn-demo | `

Set-ClusterParameter -Name DnsName -Value FCIDNN

Clients will now enter FCIDNN into their connection string when connecting to the SQL
Server FCI.

2 Warning

Do not delete the current virtual network name (VNN) as it is a necessary


component of the FCI infrastructure.

Rename the VNN


If you have an existing virtual network name and you want clients to continue using this
value to connect to the SQL Server FCI, you must rename the current VNN to a
placeholder value. After the current VNN is renamed, you can set the DNS name value
for the DNN to the VNN.

Some restrictions apply for renaming the VNN. For more information, see Renaming an
FCI.
If using the current VNN is not necessary for your business, skip this section. After
you've renamed the VNN, then set the cluster DNN DNS name.

Set DNN resource online


After your DNN resource is appropriately named, and you've set the DNS name value in
the cluster, use PowerShell to set the DNN resource online in the cluster:

PowerShell

Start-ClusterResource -Name <dnnResourceName>

For example, to start your DNN resource dnn-demo , use the following PowerShell
command:

PowerShell

Start-ClusterResource -Name dnn-demo

Configure possible owners


By default, the cluster binds the DNN DNS name to all the nodes in the cluster.
However, nodes in the cluster that are not part of the SQL Server FCI should be excluded
from the list of DNN possible owners.

To update possible owners, follow these steps:

1. Go to your DNN resource in Failover Cluster Manager.

2. Right-click the DNN resource and select Properties.


3. Clear the check box for any nodes that don't participate in the failover cluster
instance. The list of possible owners for the DNN resource should match the list of
possible owners for the SQL Server instance resource. For example, assuming that
Data3 does not participate in the FCI, the following image is an example of
removing Data3 from the list of possible owners for the DNN resource:
4. Select OK to save your settings.

Restart SQL Server instance


Use Failover Cluster Manager to restart the SQL Server instance. Follow these steps:

1. Go to your SQL Server resource in Failover Cluster Manager.


2. Right-click the SQL Server resource, and take it offline.
3. After all associated resources are offline, right-click the SQL Server resource and
bring it online again.

Update connection string


Update the connection string of any application connecting to the SQL Server FCI DNN,
and include MultiSubnetFailover=True in the connection string. If your client does not
support the MultiSubnetFailover parameter, it is not compatible with a DNN.

The following is an example connection string for a SQL FCI DNN with the DNS name of
FCIDNN:

Data Source=FCIDNN, MultiSubnetFailover=True

Additionally, if the DNN is not using the original VNN, SQL clients that connect to the
SQL Server FCI will need to update their connection string to the DNN DNS name. To
avoid this requirement, you can update the DNS name value to be the name of the
VNN. But you'll need to replace the existing VNN with a placeholder first.

Test failover
Test failover of the clustered resource to validate cluster functionality.

To test failover, follow these steps:

1. Connect to one of the SQL Server cluster nodes by using RDP.


2. Open Failover Cluster Manager. Select Roles. Notice which node owns the SQL
Server FCI role.
3. Right-click the SQL Server FCI role.
4. Select Move, and then select Best Possible Node.

Failover Cluster Manager shows the role, and its resources go offline. The resources
then move and come back online in the other node.
Test connectivity
To test connectivity, sign in to another virtual machine in the same virtual network. Open
SQL Server Management Studio and connect to the SQL Server FCI by using the DNN
DNS name.

If you need to, you can download SQL Server Management Studio.

Avoid IP conflict
This is an optional step to prevent the virtual IP (VIP) address used by the FCI resource
from being assigned to another resource in Azure as a duplicate.

Although customers now use the DNN to connect to the SQL Server FCI, the virtual
network name (VNN) and virtual IP cannot be deleted as they are necessary components
of the FCI infrastructure. However, since there is no longer a load balancer reserving the
virtual IP address in Azure, there is a risk that another resource on the virtual network
will be assigned the same IP address as the virtual IP address used by the FCI. This can
potentially lead to a duplicate IP conflict issue.

Configure an APIPA address or a dedicated network adapter to reserve the IP address.

APIPA address
To avoid using duplicate IP addresses, configure an APIPA address (also known as a link-
local address). To do so, run the following command:

PowerShell

Get-ClusterResource "virtual IP address" | Set-ClusterParameter

–Multiple
@{"Address"="169.254.1.1";"SubnetMask"="255.255.0.0";"OverrideAddressMatch"=
1;"EnableDhcp"=0}

In this command, "virtual IP address" is the name of the clustered VIP address resource,
and "169.254.1.1" is the APIPA address chosen for the VIP address. Choose the address
that best suits your business. Set OverrideAddressMatch=1 to allow the IP address to be
on any network, including the APIPA address space.

Dedicated network adapter


Alternatively, configure a network adapter in Azure to reserve the IP address used by the
virtual IP address resource. However, this consumes the address in the subnet address
space, and there is the additional overhead of ensuring the network adapter is not used
for any other purpose.

Limitations
The client connecting to the DNN listener must support the
MultiSubnetFailover=True parameter in the connection string.

There might be more considerations when you're working with other SQL Server
features and an FCI with a DNN. For more information, see FCI with DNN
interoperability.

Next steps
To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Failover cluster instances with SQL Server on Azure VMs
Failover cluster instance overview
HADR settings for SQL Server on Azure VMs
Feature interoperability with SQL Server
FCI & DNN
Article • 03/14/2023

Applies to:
SQL Server on Azure VM

 Tip

There are many methods to deploy an availability group. Simplify your


deployment and eliminate the need for an Azure Load Balancer or distributed
network name (DNN) for your Always On availability group by creating your SQL
Server virtual machines (VMs) in multiple subnets within the same Azure virtual
network. If you've already created your availability group in a single subnet, you
can migrate it to a multi-subnet environment.

There are certain SQL Server features that rely on a hard-coded virtual network name
(VNN). As such, when using the distributed network name (DNN) resource with your
failover cluster instance and SQL Server on Azure VMs, there are some additional
considerations.

In this article, learn how to configure the network alias when using the DNN resource, as
well as which SQL Server features require additional consideration.

Create network alias (FCI)


Some server-side components rely on a hard-coded VNN value, and require a network
alias that maps the VNN to the DNN DNS name to function properly.
Follow the steps in
Create a server alias to create an alias that maps the VNN to the DNN DNS name.

For a default instance, you can map the VNN to the DNN DNS name directly, such that
VNN = DNN DNS name.
For example, if VNN name is FCI1 , instance name is
MSSQLSERVER , and the DNN is FCI1DNN (clients previously connected to FCI , and now
they connect to FCI1DNN ) then map the VNN FCI1 to the DNN FCI1DNN .

For a named instance the network alias mapping should be done for the full instance,
such that VNN\Instance = DNN\Instance .
For example, if VNN name is FCI1 , instance
name is instA , and the DNN is FCI1DNN (clients previously connected to FCI1\instA ,
and now they connect to FCI1DNN\instaA ) then map the VNN FCI1\instaA to the DNN
FCI1DNN\instaA .
Client drivers
For ODBC, OLEDB, ADO.NET, JDBC, PHP, and Node.js drivers, users need to explicitly
specify the DNN DNS name as the server name in the connection string. To ensure rapid
connectivity upon failover, add MultiSubnetFailover=True to the connection string if the
SQL client supports it.

Tools
Users of SQL Server Management Studio, sqlcmd, Azure Data Studio, and SQL Server
Data Tools need to explicitly specify the DNN DNS name as the server name in the
connection string.

Availability groups and FCI


You can configure an Always On availability group by using a failover cluster instance as
one of the replicas. In this configuration, the mirroring endpoint URL for the FCI replica
needs to use the FCI DNN. Likewise, if the FCI is used as a read-only replica, the read-
only routing to the FCI replica needs to use the FCI DNN.

The format for the mirroring endpoint is: ENDPOINT_URL = 'TCP://<DNN DNS name>:
<mirroring endpoint port>' .

For example, if your DNN DNS name is dnnlsnr , and 5022 is the port of the FCI's
mirroring endpoint, the Transact-SQL (T-SQL) code snippet to create the endpoint URL
looks like:

SQL

ENDPOINT_URL = 'TCP://dnnlsnr:5022'

Likewise, the format for the read-only routing URL is: TCP://<DNN DNS name>:<SQL Server
instance port> .

For example, if your DNN DNS name is dnnlsnr , and 1444 is the port used by the read-
only target SQL Server FCI, the T-SQL code snippet to create the read-only routing URL
looks like:

SQL

READ_ONLY_ROUTING_URL = 'TCP://dnnlsnr:1444'

You can omit the port in the URL if it is the default 1433 port. For a named instance,
configure a static port for the named instance and specify it in the read-only routing
URL.

Replication
Replication has three components: Publisher, Distributor, Subscriber. Any of these
components can be a failover cluster instance. Because the FCI VNN is heavily used in
replication configuration, both explicitly and implicitly, a network alias that maps the
VNN to the DNN might be necessary for replication to work.

Keep using the VNN name as the FCI name within replication, but create a network alias
in the following remote situations before you configure replication:

Replication component (FCI Remote Network alias map Server with


with DNN) component network map

Publisher Distributor Publisher VNN to Distributor


Publisher DNN

Distributor Subscriber Distributor VNN to Subscriber


Distributor DNN

Distributor Publisher Distributor VNN to Publisher


Distributor DNN

Subscriber Distributor Subscriber VNN to Distributor


Subscriber DNN

For example, assume you have a Publisher that's configured as an FCI using DNN in a
replication topology, and the Distributor is remote. In this case, create a network alias on
the Distributor server to map the Publisher VNN to the Publisher DNN:
Use the full instance name for a named instance, like the following image example:

Database mirroring
You can configure database mirroring with an FCI as either database mirroring partner.
Configure it by using Transact-SQL (T-SQL) rather than the SQL Server Management
Studio GUI. Using T-SQL will ensure that the database mirroring endpoint is created
using the DNN instead of the VNN.

For example, if your DNN DNS name is dnnlsnr , and the database mirroring endpoint is
7022, the following T-SQL code snippet configures the database mirroring partner:

SQL

ALTER DATABASE AdventureWorks

SET PARTNER =

'TCP://dnnlsnr:7022'

GO

For client access, the Failover Partner property can handle database mirroring failover,
but not FCI failover.

MSDTC
The FCI can participate in distributed transactions coordinated by Microsoft Distributed
Transaction Coordinator (MSDTC). Clustered MSDTC and local MSDTC are supported
with FCI DNN. In Azure, an Azure Load Balancer is necessary for a clustered MSDTC
deployment.

 Tip

The DNN defined in the FCI does not replace the Azure Load Balancer requirement
for the clustered MSDTC.

FileStream
Though FileStream is supported for a database in an FCI, accessing FileStream or
FileTable by using File System APIs with DNN is not supported.

Linked servers
Using a linked server with an FCI DNN is supported. Either use the DNN directly to
configure a linked server, or use a network alias to map the VNN to the DNN.

For example, to create a linked server with DNN DNS name dnnlsnr for named instance
insta1 , use the following Transact-SQL (T-SQL) command:

SQL

USE [master]

GO

EXEC master.dbo.sp_addlinkedserver

@server = N'dnnlsnr\inst1',

@srvproduct=N'SQL Server' ;

GO

Alternatively, you can create the linked server using the virtual network name (VNN)
instead, but you will then need to define a network alias to map the VNN to the DNN.

For example, for instance name insta1 , VNN name vnnname , and DNN name dnnlsnr ,
use the following Transact-SQL (T-SQL) command to create a linked server using the
VNN:

SQL

USE [master]

GO

EXEC master.dbo.sp_addlinkedserver

@server = N'vnnname\inst1',

@srvproduct=N'SQL Server' ;

GO

Then, create a network alias to map vnnname\insta1 to dnnlsnr\insta1 .

Frequently asked questions


Which SQL Server version brings DNN support?

SQL Server 2019 CU2 and later.

What is the expected failover time when DNN is used?

For DNN, the failover time will be just the FCI failover time, without any time added
(like probe time when you're using Azure Load Balancer).

Is there any version requirement for SQL clients to support DNN with OLEDB and
ODBC?

We recommend MultiSubnetFailover=True connection string support for DNN. It's


available starting with SQL Server 2012 (11.x).

Are any SQL Server configuration changes required for me to use DNN?

SQL Server does not require any configuration change to use DNN, but some SQL
Server features might require more consideration.

Does DNN support multiple-subnet clusters?

Yes. The cluster binds the DNN in DNS with the physical IP addresses of all nodes
in the cluster regardless of the subnet. The SQL client tries all IP addresses of the
DNS name regardless of the subnet.

Next steps
To learn more, see:

Windows Server Failover Cluster with SQL Server on Azure VMs


Failover cluster instances with SQL Server on Azure VMs
Failover cluster instance overview
HADR settings for SQL Server on Azure VMs
Azure PowerShell Documentation
Official product documentation for Azure PowerShell. Azure PowerShell is a collection of
modules for managing Azure resources from PowerShell.

About Azure PowerShell

e OVERVIEW

Get started

What is Azure PowerShell?

Introducing the Az PowerShell module

Support Lifecycle

d TRAINING

Automate Azure tasks from PowerShell

Choose the best Azure command line tools for managing and provisioning your cloud
infrastructure

Installation

a DOWNLOAD

Install

Install - Windows

Install - Linux

Install - macOS

Run in Azure Cloud Shell

Azure PowerShell in Docker

What's new

h WHAT'S NEW
Release notes

Az 10.0.0 migration guide

Upcoming breaking changes

Azure PowerShell Reference

i REFERENCE

Cmdlet reference

Identity and Authentication

h WHAT'S NEW

Azure AD to Microsoft Graph Migration changes

c HOW-TO GUIDE

Authentication methods

Create a service principal

Credential Contexts

Concepts

c HOW-TO GUIDE

Manage subscriptions

Manage Azure resources with Invoke-AzRestMethod

Filter cmdlet results

Format output

PowerShell jobs

g TUTORIAL
Create virtual machines

Configuration

c HOW-TO GUIDE

Configure global settings

Intelligent command completion

Use the Az PowerShell module behind a proxy

Deploy

` DEPLOY

Deploy resource manager templates

Export resource manager templates

Deploy private resource manager templates

Samples

s SAMPLE

Azure App Service

SQL databases

Cosmos DB

Samples repo

Migrate from AzureRM

e OVERVIEW

Changes between AzureRM and Az


c HOW-TO GUIDE

Migration steps

f QUICKSTART

Automatically migrate PowerShell scripts

Help & Support

e OVERVIEW

Report product issues

Troubleshoot

Follow Azure PowerShell on Twitter

Azure Tools Blog


SQL
Reference

Commands
az sql Manage Azure SQL Databases and Data Warehouses.
Transact-SQL reference (Database
Engine)
Article • 07/12/2023

Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Azure Synapse Analytics Analytics Platform System (PDW) SQL Endpoint in
Microsoft Fabric Warehouse in Microsoft Fabric

This article gives the basics about how to find and use the Microsoft Transact-SQL (T-
SQL) reference articles. T-SQL is central to using Microsoft SQL products and services. All
tools and applications that communicate with a SQL Server database do so by sending
T-SQL commands.

T-SQL compliance with the SQL standard


For detailed technical documents about how certain standards are implemented in SQL
Server, see the Microsoft SQL Server Standards Support documentation.

Tools that use T-SQL


Some of the Microsoft tools that issue T-SQL commands are:

SQL Server Management Studio (SSMS)


Azure Data Studio
SQL Server Data Tools (SSDT)
sqlcmd

Locate the Transact-SQL reference articles


To find T-SQL articles, use search at the top right of this page, or use the table of
contents on the left side of the page. You can also type a T-SQL key word in the
Management Studio Query Editor window, and press F1.

Find system views


To find the system tables, views, functions, and procedures, see these links, which are in
the Using relational databases section of the SQL documentation.

System catalog Views


System compatibility views
System dynamic management views
System functions
System information schema views
System stored procedures
System tables

"Applies to" references


The T-SQL reference articles encompass multiple versions of SQL Server, starting with
2008, and the other Azure SQL services. Near the top of each article, is a section that
indicates which products and services support subject of the article.

For example, this article applies to all versions, and has the following label.

Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Azure Synapse Analytics Analytics Platform System (PDW)

Another example, the following label indicates an article that applies only to Azure
Synapse Analytics and Parallel Data Warehouse.

Applies to: Azure Synapse Analytics Analytics Platform System (PDW)

In some cases, the article is used by a product or service, but all of the arguments aren't
supported. In this case, other Applies to sections are inserted into the appropriate
argument descriptions in the body of the article.

Get help from Microsoft Q & A


For online help, see the Microsoft Q & A Transact-SQL Forum.

See other language references


The SQL docs include these other language references:

XQuery Language Reference


Integration Services Language Reference
Replication Language Reference
Analysis Services Language Reference

Next steps
Tutorial: Writing Transact-SQL Statements
Transact-SQL Syntax Conventions (Transact-SQL)
Connection modules for Microsoft SQL
Database
Article • 07/19/2023

This article provides download links to connection modules or drivers that your client
programs can use for interacting with Microsoft SQL Server, and with its twin in the
cloud Azure SQL Database. Drivers are available for a variety of programming
languages, running on the following operating systems:

Linux
macOS
Windows

OOP-to-relational mismatch:

Relational: Client programs that are written in an object-oriented programming (OOP)


language often use SQL drivers, which return queried data in a format that is more
relational than object oriented. C# using ADO.NET is one example. The OOP-relational
format mismatch sometimes makes the OOP code harder to write and understand.

ORM: Other drivers or frameworks return queried data in the OOP format, avoiding the
mismatch. These drivers work by expecting that classes have been defined to match the
data columns of particular SQL tables. The driver then performs the object-relational
mapping (ORM) to return queried data as an instance of a class. Microsoft's Entity
Framework (EF) for C#, and Hibernate for Java, are two examples.

The present article devotes separate sections to these two kinds of connection drivers.

Drivers for relational access


Language Download the SQL driver

C# ADO.NET
Microsoft.Data.SqlClient
.NET Core for: Linux-Ubuntu, macOS, Windows
Entity Framework Core
Entity Framework

C++ ODBC

OLE DB
Language Download the SQL driver

Go Go MSSQL driver, install instructions


Go download page

Java JDBC

Node.js Node.js driver, install instructions

PHP PHP

Python pyodbc, install instructions


Download ODBC

Ruby Ruby driver, install instructions


Ruby download page

Drivers for ORM access


The following table lists examples of Object Relational Mapping (ORM) frameworks that
client applications use to connect to Microsoft SQL Database.

Language ORM driver download

C# Entity Framework Core


Entity Framework (6.x or later)

Go GORM

Java Hibernate ORM

PHP Eloquent ORM, included in Laravel install

Node.js Sequelize ORM


Prisma

Python Django
SQL Server backend for Django

Ruby Ruby on Rails

Build-an-app webpages
https://aka.ms/sqldev takes you to a set of Build-an-app webpages. The webpages
provide information about numerous combinations of programming language,
operating system, and SQL connection driver. Among the information provided by the
Build-an-app webpages are the following items:
Details about how to get started from the very beginning, for each combination of
language + operating system + driver.
Instructions for installing the latest SQL connection drivers.
Code examples for each of the following items:
Object-relational code examples.
ORM code examples.
Columnstore index demonstrations for much faster performance.

First page, of Build-an-app webpages:


Menu for Java - Ubuntu, of Build-an-app webpages

Related links
Code examples for connecting to Azure SQL Database in the cloud, with Java and
other languages.
Frequently asked questions for
SQL Server on Azure VMs
FAQ

Applies to: SQL Server on Azure VM

This article provides answers to some of the most common questions about running
SQL Server on Windows Azure Virtual Machines (VMs) .

If your Azure issue is not addressed in this article, visit the Azure forums on Microsoft Q
& A and Stack Overflow . You can post your issue in these forums, or post to
@AzureSupport on Twitter . You also can submit an Azure support request. To submit a
support request, on the Azure support page, select Get support.

Images
What SQL Server virtual machine gallery images
are available?
Azure maintains virtual machine images for all supported major releases of SQL Server
on all editions for both Windows and Linux. For more information, see the complete list
of Windows VM images and Linux VM images.

Are existing SQL Server virtual machine gallery


images updated?
Every two months, SQL Server images in the virtual machine gallery are updated with
the latest Windows and Linux updates. For Windows images, this includes any updates
that are marked important in Windows Update, including important SQL Server security
updates and service packs. For Linux images, this includes the latest system updates.
SQL Server cumulative updates are handled differently for Linux and Windows. For
Linux, SQL Server cumulative updates are also included in the refresh. But at this time,
Windows VMs aren't updated with SQL Server or Windows Server cumulative updates.

Can SQL Server virtual machine images get


removed from the gallery?
Yes. Azure only maintains one image per major version and edition. For example, when a
new SQL Server service pack is released, Azure adds a new image to the gallery for that
service pack. The SQL Server image for the previous service pack is immediately
removed from the Azure portal. However, it is still available for provisioning from
PowerShell for the next three months. After three months, the previous service pack
image is no longer available. This removal policy would also apply if a SQL Server
version becomes unsupported when it reaches the end of its lifecycle.

Is it possible to deploy an older image of SQL


Server that is not visible in the Azure portal?
Yes, by using PowerShell. For more information about deploying SQL Server VMs using
PowerShell, see How to provision SQL Server virtual machines with Azure PowerShell.

Is it possible to create a generalized Azure


Marketplace SQL Server image of my SQL Server
VM and use it to deploy VMs?
Yes, but you must then register each SQL Server VM with the SQL IaaS Agent extension
to manage your SQL Server VM in the portal, as well as utilize features such as
automated patching and automatic backups. When registering with the extension, you
will also need to specify the license type for each SQL Server VM.

How do I generalize SQL Server on Azure VM


and use it to deploy new VMs?
You can deploy a Windows Server VM (without SQL Server installed on it) and use the
SQL sysprep process to generalize SQL Server on Azure VM (Windows) with the SQL
Server installation media. Customers who have Software Assurance can obtain their
installation media from the Volume Licensing Center . Customers who don't have
Software Assurance can use the setup media from an Azure Marketplace SQL Server VM
image that has the desired edition.

Alternatively, use one of the SQL Server images from Azure Marketplace to generalize
SQL Server on Azure VM. Note that you must delete the following registry key in the
source image before creating your own image. Failure to do so can result in the bloating
of the SQL Server setup bootstrap folder and/or SQL IaaS Agent extension in failed
state.
Registry Key path:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\SysPrep

External\Specialize

7 Note

SQL Server on Azure VMs, including those deployed from custom generalized
images, should be registered with the SQL IaaS Agent extension to meet
compliance requirements and to utilize optional features such as automated
patching and automatic backups. The extension also allows you to specify the
license type for each SQL Server VM.

Can I use my own VHD to deploy a SQL Server


VM?
Yes, but you must then register each SQL Server VM with the SQL IaaS Agent extension
to manage your SQL Server VM in the portal, as well as utilize features such as
automated patching and automatic backups.

Is it possible to set up configurations not shown


in the virtual machine gallery (for example
Windows 2008 R2 + SQL Server 2012)?
No. For virtual machine gallery images that include SQL Server, you must select one of
the provided images either through the Azure portal or via PowerShell. However, you
have the ability to deploy a Windows VM and self-install SQL Server to it. You must then
register your SQL Server VM with the SQL IaaS Agent extension to manage your SQL
Server VM in the Azure portal, as well as utilize features such as automated patching
and automatic backups.

I can't find the version and edition of SQL Server


that I want from the images available on Azure
Marketplace.
If the version and edition of SQL Server you're looking for isn't available in the Images
drop-down on Azure Marketplace, deploy a Windows-only Azure virtual machine, and
then manually install the version and edition of SQL Server you want. Register your SQL
Server VM with the SQL IaaS Agent extension if you want to manage your SQL Server
VM from the Azure portal.

I cannot find the version of Windows, such as


Azure Edition, among the SQL Server images
available on Azure Marketplace.
If the version of Windows you're looking for isn't available in the SQL Server images
found in the Images drop-down of Azure Marketplace, deploy a Windows-only Azure
virtual machine with the desired edition, and then manually install the version and
edition of SQL Server you want. Register your SQL Server VM with the SQL IaaS Agent
extension if you want to manage your SQL Server VM from the Azure portal.

Is there a free edition of SQL Server available on


Azure Marketplace?
Developer and Express editions of SQL Server are available on Azure Marketplace, which
do not charge you for the SQL Server license. If the Express or Developer edition isn't
available for the version of SQL Server you're looking for, deploy a Windows-only Azure
virtual machine, and then manually install the version and edition of SQL Server you
want. Register your SQL Server VM with the SQL IaaS Agent extension if you want to
manage your SQL Server VM from the Azure portal.

Creation
How do I create an Azure virtual machine with
SQL Server?
The easiest method is to create a virtual machine that includes SQL Server. For a tutorial
on signing up for Azure and creating a SQL Server VM from the portal, see Provision a
SQL Server virtual machine in the Azure portal. You can select a virtual machine image
that uses pay-per-second SQL Server licensing, or you can use an image that allows you
to bring your own SQL Server license. You also have the option of manually installing
SQL Server on a VM with either a freely licensed edition (Developer or Express) or by
reusing an on-premises license. Be sure to register your SQL Server VM with the SQL
IaaS Agent extension to manage your SQL Server VM in the portal, as well as utilize
features such as automated patching and automatic backups. If you bring your own
license, you must have License Mobility through Software Assurance on Azure . For
more information, see Pricing guidance for SQL Server Azure VMs.

How can I migrate my on-premises SQL Server


database to the cloud?
First create an Azure virtual machine with a SQL Server instance. Then migrate your on-
premises databases to that instance. For data migration strategies, see Migration guide:
SQL Server to SQL Server on Azure Virtual Machines.

Licensing
How can I install my licensed copy of SQL Server
on an Azure VM?
There are three ways to do this. If you're an Enterprise Agreement (EA) customer, you
can provision one of the virtual machine images. If you have Software Assurance , you
can enable the Azure Hybrid Benefit on an existing pay-as-you-go (PAYG) image. Or you
can copy the SQL Server installation media to a Windows Server VM, and then install
SQL Server on the VM. Be sure to register your SQL Server VM with the extension for
features such as portal management, automated backup and automated patching.

Does a customer need SQL Server Client Access


Licenses (CALs) to connect to a SQL Server pay-
as-you-go image that is running on Azure Virtual
Machines?
No. Customers need CALs when they use bring-your-own-license and move their SQL
Server SA server / CAL VM to Azure VMs.

Can I change a VM to use my own SQL Server


license if it was created from one of the pay-as-
you-go gallery images?
Yes. You can easily switch a pay-as-you-go (PAYG) gallery image to bring-your-own-
license (BYOL) by enabling the Azure Hybrid Benefit . For more information, see How
to change the licensing model for a SQL Server VM. Currently, this facility is only
available for public and Azure Government cloud customers.

Will switching licensing models require any


downtime for SQL Server?
No. Changing the licensing model does not require any downtime for SQL Server as the
change is effective immediately and does not require a restart of the VM.

Is it possible to switch licensing models on a SQL


Server VM deployed using classic model?
No. Changing licensing models is not supported on a classic VM. You may migrate your
VM to the Azure Resource Manager model and register with the SQL IaaS Agent
extension. Once the VM is registered with the SQL IaaS Agent extension, licensing
model changes will be available on the VM.

Can I use the Azure portal to manage multiple


instances on the same VM?
No. Portal management is a feature provided by the SQL IaaS Agent extension, which
relies on the SQL Server IaaS Agent extension. As such, the same limitations apply to the
extension as to the extension. The portal can either only manage one default instance,
or one named instance, as long as it was configured correctly. For more information on
these limitations, see SQL Server IaaS agent extension.

Can CSP subscriptions activate the Azure Hybrid


Benefit?
Yes, Azure Cloud Solution Provider (CSP) customers can use the Azure Hybrid Benefit by
first deploying a pay-as-you-go VM and then converting it to bring-your-own-license, if
they have active Software Assurance.

Do I have to pay to license SQL Server on an


Azure VM if it is only being used for
standby/failover?
To have a free passive license for a standby secondary availability group or failover
clustered instance, you must meet all of the following criteria as outlined by the Product
Licensing Terms :

1. You have license mobility through Software Assurance .


2. The passive SQL Server instance does not serve SQL Server data to clients or run
active SQL Server workloads. It is only used to synchronize with the primary server
and otherwise maintain the passive database in a warm standby state. If it is
serving data, such as reports to clients running active SQL Server workloads, or
performing any work other than what is specified in the product terms, it must be
a paid licensed SQL Server instance. The following activity is permitted on the
secondary instance: database consistency checks or CheckDB, full backups,
transaction log backups, and monitoring resource usage data. You may also run
the primary and corresponding disaster recovery instance simultaneously for brief
periods of disaster recovery testing every 90 days.
3. The active SQL Server license is covered by Software Assurance and allows for one
passive secondary SQL Server instance, with up to the same amount of compute as
the licensed active server, only.
4. The secondary SQL Server VM utilizes the Disaster Recovery license in the Azure
portal.

What is considered a passive instance?


The passive SQL Server instance does not serve SQL Server data to clients or run active
SQL Server workloads. It is only used to synchronize with the primary server and
otherwise maintain the passive database in a warm standby state. If it is serving data,
such as reports to clients running active SQL Server workloads, or performing any work
other than what is specified in the product terms, it must be a paid licensed SQL Server
instance. The following activity is permitted on the secondary instance: database
consistency checks or CheckDB, full backups, transaction log backups, and monitoring
resource usage data. You may also run the primary and corresponding disaster recovery
instance simultaneously for brief periods of disaster recovery testing every 90 days.

What scenarios can utilize the Disaster Recovery


(DR) benefit?
The licensing guide provides scenarios in which the Disaster Recovery Benefit can be
utilized. Refer to your Product Terms and talk to your licensing contacts or account
manager for more information.
Which subscriptions support the Disaster
Recovery (DR) benefit?
Comprehensive programs that offer Software Assurance equivalent subscription rights
as a fixed benefit support the DR benefit. This includes. but is not limited to, the Open
Value (OV), Open Value Subscription (OVS), Enterprise Agreement (EA), Enterprise
Agreement Subscription (EAS), and the Server and Cloud Enrollment (SCE). Refer to the
product terms and talk to your licensing contacts or account manager for more
information.

Administration
Can I install a second instance of SQL Server on
the same VM? Can I change installed features of
the default instance?
Yes. The SQL Server installation media is located in a folder on the C drive. Run
Setup.exe from that location to add new SQL Server instances or to change other
installed features of SQL Server on the machine. Note that some features, such as
Automated Backup, Automated Patching, and Azure Key Vault Integration, only operate
against the default instance, or a named instance that was configured properly (See
Question 3). Customers using Software Assurance through the Azure Hybrid Benefit or
the pay-as-you-go licensing model can install multiple instances of SQL Server on the
virtual machine without incurring extra licensing costs. Additional SQL Server instances
may strain system resources unless configured correctly.

What is the maximum number of instances on a


VM?
SQL Server 2012 to SQL Server 2019 can support 50 instances on a stand-alone server.
This is the same limit regardless of in Azure on-premises. See best practices to learn how
to better prepare your environment.

Microsoft Visual C++ Redistributable installed


with SQL Server is flagged as end of life or
obsolete
When you provision SQL Server on Azure VM, the SQL Server setup program installs a
Microsoft Visual C++ Redistributable which is required for SQL Server components to
run properly. Your security software may send alerts about end of life (EOL) or obsolete
software components due to the version of the Microsoft Visual C++ Redistributable
components that was installed by SQL Server, particularly for older versions of SQL
Server (SQL Server 2016 and earlier). According to the support lifecycle policy, Microsoft
Visual C++ Redistributable components are supported as long as the product that
installed them is supported. As long as your installed version of SQL Server is still
supported, you can safely ignore this warning. We recommend not removing VC++ as it
may break some SQL Server functionality.

Can I uninstall the default instance of SQL


Server?
Yes, but there are some considerations. First, SQL Server-associated billing may continue
to occur depending on the license model for the VM. Second, as stated in the previous
answer, there are features that rely on the SQL Server IaaS Agent Extension. If you
uninstall the default instance without removing the IaaS extension also, the extension
continues to look for the default instance and may generate event log errors. These
errors are from the following two sources: Microsoft SQL Server Credential
Management and Microsoft SQL Server IaaS Agent. One of the errors might be similar
to the following:

A network-related or instance-specific error occurred while establishing a connection to


SQL Server. The server was not found or was not accessible.

If you do decide to uninstall the default instance, also uninstall the SQL Server IaaS
Agent Extension as well.

Can I use a named instance of SQL Server with


the IaaS extension?
Yes, if the named instance is the only instance on the SQL Server, and if the original
default instance was uninstalled properly. If there is no default instance and there are
multiple named instances on a single SQL Server VM, the SQL Server IaaS agent
extension will fail to install.

Can I remove SQL Server and the associated


license billing from a SQL Server VM?
Yes, but you'll need to take additional steps to avoid being charged for your SQL Server
instance as described in Pricing guidance. If you want to completely remove the SQL
Server instance, you can migrate to another Azure VM without SQL Server pre-installed
on the VM and delete the current SQL Server VM. If you want to keep the VM but stop
SQL Server billing, follow these steps:

1. Back up all of your data, including system databases, if necessary.


2. Uninstall SQL Server completely, including the SQL IaaS Agent extension (if
present).
3. Install the free SQL Express edition .
4. Register with the SQL IaaS Agent extension.
5. Change the edition of SQL Server in the Azure portal to Express to stop billing.
6. (optional) Disable the Express SQL Server service by disabling service startup.

Can I use the Azure portal to manage multiple


instances on the same VM?
No. Portal management is provided by the SQL IaaS Agent extension, which relies on
the SQL Server IaaS Agent extension. As such, the same limitations apply to the portal as
the extension. The portal can either only manage one default instance, or one named
instance as long as it's configured correctly. For more information, see SQL Server IaaS
Agent extension

Is Azure Active Directory Domain Services (Azure


AD DS) supported with SQL Server on Azure
VMs?
No. Using Azure Active Directory Domain Services (Azure AD DS) is not currently
supported with SQL Server on Azure VMs. Use an Active Directory domain account
instead.

Updating and patching


How do I change to a different version/edition of
SQL Server in an Azure VM?
Customers can change their version/edition of SQL Server by using setup media that
contains their desired version or edition of SQL Server. Once the edition has been
changed, use the Azure portal to modify the edition property of the VM to accurately
reflect billing for the VM. For more information, see change edition of a SQL Server VM.
There is no billing difference for different versions of SQL Server, so once the version of
SQL Server has been changed, no further action is needed.

How do I get the SQL Server installation media?


For SQL Server VMs deployed through Azure Marketplace, the installation media is at
C:\SQLServerFull . Run Setup.exe from that location to add new SQL Server instances or

to change other installed features of SQL Server on the machine. You can also copy this
setup media to other virtual machines to install, or upgrade, that same version and
edition of SQL Server. Customers who have Software Assurance can obtain their
installation media from the Volume Licensing Center .

How are updates and service packs applied on a


SQL Server VM?
Virtual machines give you control over the host machine, including when and how you
apply updates. For the operating system, you can manually apply windows updates, or
you can enable a scheduling service called Automated Patching. Automated Patching
installs any updates that are marked important, including SQL Server updates in that
category. Other optional updates to SQL Server must be installed manually.

Can I upgrade my SQL Server instance after


registering it with the SQL IaaS Agent extension?
If the OS is Windows Server 2008 R2 or later, yes. You can use any setup media to
upgrade the version and edition of SQL Server, and then you can register with the SQL
IaaS Agent extension. Doing so gives you access to all the benefits of the SQL IaaS
Agent extension such as portal manageability, automated backups, and automated
patching. If the OS version is Windows Server 2008, the extension is only supported with
limited functionality.

How can I get free extended security updates for


my end of support instances?
You can get free extended security updates by moving your SQL Server as-is to an Azure
virtual machine. Updates are available through the Windows Update channel. For more
information, see end of support options.
General
Are SQL Server failover cluster instances (FCI)
supported on Azure VMs?
Yes. You can configure a failover cluster instance using Azure shared disks, premium file
shares (PFS), or storage spaces direct (S2D) for the storage subsystem. Premium file
shares provide IOPS and throughput capacities that meet the needs of many workloads.
For IO-intensive workloads, consider using storage spaces direct based on managed
premium or ultra-disks. Alternatively, you can use third-party clustering or storage
solutions as described in High availability and disaster recovery for SQL Server on Azure
Virtual Machines.

) Important

SQL Server FCIs registered with the extension do not support features that require
the agent, such as automated backup, patching, and advanced portal management.
Review feature benefits to learn more.

What is the difference between SQL Server VMs


and the SQL Database service?
Conceptually, running SQL Server on an Azure virtual machine is not that different from
running SQL Server in a remote datacenter. In contrast, Azure SQL Database offers
database-as-a-service. With SQL Database, you do not have access to the machines that
host your databases. For a full comparison, see Choose a cloud SQL Server option: Azure
SQL (PaaS) Database or SQL Server on Azure VMs (IaaS).

How do I install SQL Data tools on my Azure


VM?
Download and install the SQL Data tools from Microsoft SQL Server Data Tools -
Business Intelligence for Visual Studio 2013 .

Are distributed transactions with MSDTC


supported on SQL Server VMs?
Yes. Local DTC is supported for SQL Server 2016 SP2 and greater. However, applications
must be tested when utilizing Always On availability groups, as transactions in-flight
during a failover will fail and must be retried. Clustered DTC is available starting with
Windows Server 2019.

Does Azure SQL virtual machine move or store


customer data out of region?
No. In fact, Azure SQL virtual machine and the SQL IaaS Agent extension do not store
any customer data. Review the SQL IaaS Agent extension privacy statements to learn
more.

What Azure Load Balancer SKU should be used


for a cross-cluster migration of an availability
group?
To perform a cross-cluster migration of an availability group on SQL Server on Azure
VMs, use the standard Azure Load Balancer SKU.

Can I use Azure premium file share to host my


database files on a standalone instance of SQL
Server?
Yes. Azure premium file shares are supported for both failover cluster instances and
standalone instances of SQL Server using the SMB protocol.

SQL Server IaaS Agent extension


Should I register my SQL Server VM provisioned
from a SQL Server image in Azure Marketplace?
No. Microsoft automatically registers VMs provisioned from the SQL Server images in
Azure Marketplace. Registering with the extension is required only if the VM was not
provisioned from the SQL Server images in Azure Marketplace and SQL Server was self-
installed.
Is the SQL IaaS Agent extension available for all
customers?
Yes. Customers should register their SQL Server VMs with the extension if they did not
use a SQL Server image from Azure Marketplace and instead self-installed SQL Server, or
if they brought their custom VHD. VMs owned by all types of subscriptions (Direct,
Enterprise Agreement, and Cloud Solution Provider) can register with the SQL IaaS
Agent extension.

What are the prerequisites to register with the


SQL IaaS Agent extension?
Check the prerequisites for details.

What Azure permissions are necessary to


register with the extension?
The client credentials used to register the virtual machine should exist in any of the
following Azure roles - Virtual Machine contributor, Contributor, or Owner.

Will registering with the SQL IaaS Agent


extension install an agent on my VM?
Not initially. When you first register with the SQL IaaS Agent extension, binaries are
copied to the SQL Server VM providing you limited functionality. Once you enable a
feature that relies on it, the SQL IaaS Agent is installed to the VM. Check the table of
benefits for information about limited functionality.

What permissions does the SQL Server IaaS


agent extension use?
October 2022 introduced the least privilege permissions model for the extension,
granting minimal permissions necessary for each feature used by the extension. SQL
Server VMs deployed after October 2022 via Azure Marketplace have the least privilege
permissions model enabled by default. The extension uses sysadmin rights for SQL
Server VMs that were deployed prior to October 2022, or self-installed SQL Server VMs,
that have not manually enabled the least privilege model in the Azure portal. Review
SQL IaaS Agent extension permissions to learn more.
Why do I see SQL virtual machines resource in
the Azure portal? Who created it? Do I get billed
for this?
The SQL virtual machines resource is a free resource that allows you to manage your
SQL Server VM from the Azure portal. The SQL virtual machines resource is created
when you deploy a SQL Server VM image from Azure Marketplace, or manually register
a SQL Server VM with the SQL IaaS Agent extension. Azure can also create this resource
automatically for existing VMs if a SQL Server instance is detected. There is no cost
associated with SQL virtual machines resource.

Will registering with the SQL IaaS Agent


extension restart SQL Server on my VM?
No, starting September 2021, restarting the SQL Server service is no longer required
when registering with the SQL IaaS Agent extension.

Can I register with the SQL IaaS Agent extension


without specifying the SQL Server license type?
No. The SQL Server license type is not an optional property when you're registering with
the SQL IaaS Agent extension. You have to set the SQL Server license type as pay-as-
you-go or Azure Hybrid Benefit when registering with the SQL IaaS Agent extension. If
you have any of the free versions of SQL Server installed, such as Developer or
Evaluation edition, you must register with pay-as-you-go licensing. Azure Hybrid Benefit
is only available for paid versions of SQL Server such as Enterprise and Standard
editions.

What is the default license type when using the


automatic registration feature?
The license type automatically defaults to that of the VM image. If you use a pay-as-
you-go image for your VM, then your license type will be PAYG , otherwise your license
type will be AHUB by default.

Is it possible to register self-deployed SQL Server


VMs with the SQL IaaS Agent extension?
Yes. If you deployed SQL Server from your own media, and installed the SQL IaaS Agent
extension you can register your SQL Server VM with the extension to get the
manageability benefits provided by the SQL IaaS Agent extension.

Is it possible to repair the SQL IaaS Agent


extension?
Yes. Navigate to the SQL virtual machines resource for your SQL Server VM, and choose
Repair under Support & troubleshooting to open the repair page and repair the
extension.

Can I register with the SQL IaaS Agent extension


from the Azure portal?
No. Registering a single VM with the SQL IaaS Agent extension is not available in the
Azure portal. Registering with the SQL IaaS Agent extension is only supported with the
Azure CLI or Azure PowerShell.

Can I register a VM with the SQL IaaS Agent


extension before SQL Server is installed?
No. A VM must have at least one SQL Server (Database Engine) instance to successfully
register with the SQL IaaS Agent extension. If there is no SQL Server instance on the VM,
the new Microsoft.SqlVirtualMachine resource will be in a failed state.

Can I register a VM with the SQL IaaS Agent


extension if there are multiple SQL Server
instances?
Yes, provided there is a default instance on the VM. The SQL IaaS Agent extension will
register only one SQL Server (Database Engine) instance. The SQL IaaS Agent extension
will register the default SQL Server instance in the case of multiple instances.

Can I register a SQL Server failover cluster


instance with the SQL IaaS Agent extension?
Yes. SQL Server failover cluster instances on an Azure VM can be registered with the SQL
IaaS Agent extension with limited functionality.
Can I register my VM with the SQL IaaS Agent
extension if an Always On availability group is
configured?
Yes. There are no restrictions to registering a SQL Server instance on an Azure VM with
the SQL IaaS Agent extension if you're participating in an Always On availability group
configuration.

What is the cost for registering with the SQL IaaS


Agent extension?
None. There is no fee associated with registering with the SQL IaaS Agent extension.
Managing your SQL Server VM with the extension is completely free.

What is the performance impact of using SQL


IaaS Agent extension?
Once you enable a feature that requires installing the agent, there is minimal impact
from the two services that are installed to the OS. These can be monitored via task
manager and seen in the built-in Windows Services console.

The two service names are:

SQLIaaSExtension (Display name - Microsoft SQL Server IaaS Agent )


SqlIaaSExtensionQuery (Display name - Microsoft SQL Server IaaS Query Service )

How do I remove the extension?


Remove the extension by unregistering the SQL Server VM from the SQL IaaS Agent
extension.

Will registering my VM with the new SQL IaaS


Agent extension bring additional costs?
No. The SQL IaaS Agent extension just enables additional manageability for SQL Server
on Azure VM with no additional charges.
Is the SQL IaaS Agent extension available for all
customers?
Yes, as long as the SQL Server VM was deployed on the public cloud using the Resource
Manager model, and not the classic model. All other customers are able to register with
the new SQL IaaS Agent extension. However, only customers with the Software
Assurance benefit can use their own license by activating the Azure Hybrid Benefit
(AHB) on a SQL Server VM.

What happens to the extension


('Microsoft.SqlVirtualMachine') resource if the
VM resource is moved or dropped?
When the Microsoft.Compute/VirtualMachine resource is dropped or moved, then the
associated Microsoft.SqlVirtualMachine resource is notified to asynchronously replicate
the operation.

What happens to the VM if the extension


('Microsoft.SqlVirtualMachine') resource is
dropped?
The Microsoft.Compute/VirtualMachine resource is not impacted when the
Microsoft.SqlVirtualMachine resource is dropped. However, the licensing changes will
default back to the original image source.

Is the extension necessary to receive Extended


Security Updates (ESU)?
No. Extended Security Updates (ESU) are applied automatically to the VM whether or
not your SQL Server VM has registered with the SQL IaaS Agent extension.

What happened to management modes of the


SQL IaaS Agent extension?
Management modes were removed from the SQL IaaS Agent extension architecture.
Starting in March 2023, registering with the SQL IaaS Agent extension initially just copies
the binaries to the SQL Server VM and offers limited functionality. Once you enable a
feature that relies on it, the SQL IaaS Agent is installed to the SQL Server VM.
Can I register my virtual machine image if I'm
using Reporting Services, Power BI Report
Server, or Analysis Services?
No. The SQL IaaS Agent extension is not supported with the following images - SQL
Server Reporting Services, SQL Server Power BI Report Server, SQL Server Analysis
Services.

Resources
Windows VMs:

Overview of SQL Server on a Windows VM


Provision SQL Server on a Windows VM
Migration guide: SQL Server to SQL Server on Azure Virtual Machines
High Availability and Disaster Recovery for SQL Server on Azure Virtual Machines
Performance best practices for SQL Server on Azure Virtual Machines
Application Patterns and Development Strategies for SQL Server on Azure Virtual
Machines

Linux VMs:

Overview of SQL Server on a Linux VM


Provision SQL Server on a Linux VM
FAQ (Linux)
SQL Server on Linux documentation
Pricing guidance for SQL Server on
Azure VMs
Article • 04/20/2023

Applies to:
SQL Server on Azure VM

This article provides pricing guidance for SQL Server on Azure Virtual Machines. There
are several options that affect cost, and it is important to pick the right image that
balances costs with business requirements.

 Tip

If you only need to find out a cost estimate for a specific combination of SQL Server
edition and virtual machine (VM) size, see the pricing page for Windows or
Linux . Select your platform and SQL Server edition from the OS/Software list.

Or use the pricing calculator to add and configure a virtual machine.

Free-licensed SQL Server editions


If you want to develop, test, or build a proof of concept, then use the freely licensed SQL
Server Developer edition. This edition has all the features of SQL Server Enterprise
edition, allowing you to build and test any type of application. However, you cannot run
the Developer edition in production. A SQL Server Developer edition VM only incurs
charges for the cost of the VM, because there are no associated SQL Server licensing
costs.

If you want to run a lightweight workload in production (<4 cores, <1-GB memory, <10
GB/database), use the freely licensed SQL Server Express edition. A SQL Server Express
edition VM also only incurs charges for the cost of the VM.

For these development/test and lightweight production workloads, you can also save
money by choosing a smaller VM size that matches these workloads. The D2as_v5 might
be a good choice in some scenarios.
To create an Azure VM running SQL Server 2022 with one of these images, see the
following links:

Platform Freely licensed images

Windows Server 2022 SQL Server 2022 Developer Azure VM

Ubuntu Pro 20.04 LTS SQL Server 2022 Developer Azure VM

Paid SQL Server editions


If you have a non-lightweight production workload, use one of the following SQL Server
editions:

SQL Server edition Workload

Web Small web sites

Standard Small to medium workloads

Enterprise Large or mission-critical workloads

You have two options to pay for SQL Server licensing for these editions: pay per usage or
Azure Hybrid Benefit.

Pay per usage


Paying the SQL Server license per usage (also known as pay as you go) means that the
per-second cost of running the Azure VM includes the cost of the SQL Server license.
You can see the pricing for the different SQL Server editions (Web, Standard, Enterprise)
in the Azure Virtual Machines pricing page for Windows or Linux .

The cost is the same for all versions of SQL Server (2012 SP3 to 2022). The per-second
licensing cost depends on the number of VM vCPUs.

Paying the SQL Server licensing per usage is recommended for:

Temporary or periodic workloads. For example, an app that needs to support an


event for a couple of months every year, or business analysis on Mondays.

Workloads with unknown lifetime or scale. For example, an app that may not be
required in a few months, or which may require more, or less compute power,
depending on demand.
To create an Azure VM running SQL Server 2022 with one of these pay-as-you-go
images, see the following links:

Platform Licensed images

Windows Server 2022 SQL Server 2022 Web Azure VM

SQL Server 2022 Standard Azure VM

SQL Server 2022 Enterprise Azure VM

Ubuntu Pro 20.04 LTS SQL Server 2022 Web Azure VM

SQL Server 2022 Standard Azure VM

SQL Server 2022 Enterprise Azure VM

) Important

When you create a SQL Server virtual machine in the Azure portal, the Choose a
size window shows an estimated cost. It is important to note that this estimate is
only the compute costs for running the VM along with any OS licensing costs
(Windows or third-party Linux operating systems).

It does not include additional SQL Server licensing costs for Web, Standard, and
Enterprise editions. To get the most accurate pricing estimate, select your operating
system and SQL Server edition on the pricing page for Windows or Linux .
7 Note

It is now possible to change the licensing model from pay-as-you-go to Azure


Hybrid Benefit and back. For more information, see How to change the licensing
model for a SQL Server VM.

Azure Hybrid Benefit (AHB)


Azure Hybrid Benefit , also referred to as AHB, is a program that allows customers to
use existing SQL Server core licenses with Software Assurance in an Azure VM. A SQL
Server VM using AHB only charges for the cost of running the VM, not for SQL Server
licensing, given that you have already acquired licenses and Software Assurance through
a Volume Licensing program or through a Cloud Solution Partner (CSP).

Bringing your own SQL Server licensing through Azure Hybrid Benefit is recommended
for:

Continuous workloads. For example, an app that needs to support business


operations 24x7.

Workloads with known lifetime and scale. For example, an app that is required for
the whole year and which demand has been forecasted.

To use AHB with a SQL Server VM, you must have a license for SQL Server Standard or
Enterprise and Software Assurance , which is a required option through some volume
licensing programs and an optional purchase with others. The pricing level provided
through Volume Licensing programs varies, based on the type of agreement and the
quantity and or commitment to SQL Server. But as a rule of thumb, Azure Hybrid Benefit
for continuous production workloads has the following benefits:

AHB Description
benefit

Cost The Azure Hybrid Benefit offers up to 55% savings. For more information, see
savings Switch licensing model

Free Another benefit of bringing your own license is the free licensing for one passive
passive secondary replica for high availability and one passive secondary for disaster
secondary recovery per SQL Server. This cuts the licensing cost of a highly available SQL Server
replica deployment (for example, using Always On availability groups) by more than half.

7 Note
As of November 2022, it's possible to use free licensing for one passive secondary
replica for high availability and one passive secondary replica for disaster recovery
when using pay-as-you-go licensing as well as AHB .

Reduce costs
To avoid unnecessary costs, choose an optimal virtual machine size and consider
intermittent shutdowns for non-continuous workloads.

Correctly size your VM


The licensing cost of SQL Server is directly related to the number of vCPUs. Choose a
VM size that matches your expected needs for CPU, memory, storage, and I/O
bandwidth. For a complete list of machine size options, see Windows VM sizes and Linux
VM sizes.

For more information on choosing the best VM size for your workload, see VM size best
practices.

Shut down your VM when possible


If you are using any workloads that do not run continuously, consider shutting down the
virtual machine during the inactive periods. You only pay for what you use.

For example, if you are simply trying out SQL Server on an Azure VM, you would not
want to incur charges by accidentally leaving it running for weeks. One solution is to use
the automatic shutdown feature .
Automatic shutdown is part of a larger set of similar features provided by Azure DevTest
Labs .

For other workflows, consider automatically shutting down and restarting Azure VMs
with a scripting solution, such as Azure Automation .

) Important

Shutting down and deallocating your VM is the only way to avoid charges. Simply
stopping or using power options to shut down the VM still incurs usage charges.

Next steps
For general Azure pricing guidance, see Prevent unexpected costs with Azure billing and
cost management. For the latest Azure Virtual Machines pricing, including SQL Server,
see the Azure Virtual Machines pricing page for Windows VMs and Linux VMs .

For an overview of SQL Server on Azure Virtual Machines, see the following articles:

Overview of SQL Server on Windows VMs


Overview of SQL Server on Linux VMs
Download SQL Server Data Tools (SSDT)
for Visual Studio
Article • 07/07/2023

Applies to:
SQL Server
Azure SQL Database
Azure Synapse Analytics

SQL Server Data Tools (SSDT) is a modern development tool for building SQL Server
relational databases, databases in Azure SQL, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS) reports. With SSDT, you
can design and deploy any SQL Server content type with the same ease as you would
develop an application in Visual Studio.

SSDT for Visual Studio 2022

Changes in SSDT for Visual Studio 2022


The core SSDT functionality to create database projects has remained integral to Visual
Studio.

7 Note

There's no SSDT standalone installer for Visual Studio 2022.

Install SSDT with Visual Studio 2022


If Visual Studio 2022 is already installed, you can edit the list of workloads to include
SSDT. If you don't have Visual Studio 2022 installed, then you can download and install
Visual Studio 2022 .

To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.

1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".
2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.

3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.

For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .

Analysis Services
Integration Services
Reporting Services

Supported SQL versions in Visual Studio 2022


Project Templates SQL Platforms Supported

Relational databases SQL Server 2016 (13.x) - SQL Server 2022 (16.x)

Azure SQL Database, Azure SQL Managed Instance

Azure Synapse Analytics (dedicated pools only)

Analysis Services models


SQL Server 2016 - SQL Server 2022

Reporting Services reports

Integration Services packages SQL Server 2019 - SQL Server 2022

License terms for Visual Studio


To understand the license terms and use cases for Visual Studio, refer to (Visual Studio
License Directory)[https://visualstudio.microsoft.com/license-terms/]. For example, if you
are using the Community Edition of Visual Studio for SQL Server Data Tools, review the
EULA for that specific edition of Visual Studio in the Visual Studio License Directory.

SSDT for Visual Studio 2019

Changes in SSDT for Visual Studio 2019


The core SSDT functionality to create database projects has remained integral to Visual
Studio.

With Visual Studio 2019, the required functionality to enable Analysis Services,
Integration Services, and Reporting Services projects has moved into the respective
Visual Studio (VSIX) extensions only.

7 Note

There's no SSDT standalone installer for Visual Studio 2019.

Install SSDT with Visual Studio 2019


If Visual Studio 2019 is already installed, you can edit the list of workloads to include
SSDT. If you don't have Visual Studio 2019 installed, then you can download and install
Visual Studio 2019 Community .
To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.

1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".

2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.

3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.

For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .

Analysis Services
Integration Services
Reporting Services

Supported SQL versions in Visual Studio 2019

Project Templates SQL Platforms Supported

Relational databases SQL Server 2012 - SQL Server 2019

Azure SQL Database, Azure SQL Managed Instance

Azure Synapse Analytics (dedicated pools only)

Analysis Services models


SQL Server 2008 - SQL Server 2019

Reporting Services reports

Integration Services packages SQL Server 2012 - SQL Server 2022

Offline installation
For scenarios where offline installation is required, such as low bandwidth or isolated
networks, SSDT is available for offline installation. Two approaches are available:

For a single machine, Download All, then install


For installation on one or more machines, use the Visual Studio bootstrapper from
the command line

For more details you can follow the Step-by-Step Guidelines for Offline Installation

Previous versions
To download and install SSDT for Visual Studio 2017, or an older version of SSDT, see
Previous releases of SQL Server Data Tools (SSDT and SSDT-BI).

See Also
SSDT MSDN Forum

SSDT Team Blog

DACFx API Reference

Download SQL Server Management Studio (SSMS)


Next steps
After installation of SSDT, work through these tutorials to learn how to create databases,
packages, data models, and reports using SSDT.

Project-Oriented Offline Database Development

SSIS Tutorial: Create a Simple ETL Package

Analysis Services tutorials

Create a Basic Table Report (SSRS Tutorial)


Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback


Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.

For more information, see How to contribute to SQL Server documentation


Download SQL Server Management
Studio (SSMS)
Article • 06/28/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

Azure Synapse Analytics
SQL Endpoint in Microsoft Fabric
Warehouse in
Microsoft Fabric

SQL Server Management Studio (SSMS) is an integrated environment for managing any
SQL infrastructure, from SQL Server to Azure SQL Database. SSMS provides tools to
configure, monitor, and administer instances of SQL Server and databases. Use SSMS to
deploy, monitor, and upgrade the data-tier components used by your applications and
build queries and scripts.

Use SSMS to query, design, and manage your databases and data warehouses, wherever
they are - on your local computer or in the cloud.

Download SSMS

Free Download for SQL Server Management Studio (SSMS) 19.1

SSMS 19.1 is the latest general availability (GA) version. If you have a preview version of
SSMS 19 installed, you should uninstall it before installing SSMS 19.1. If you have SSMS
19.x installed, installing SSMS 19.1 upgrades it to 19.1.

Release number: 19.1


Build number: 19.1.56.0
Release date: May 24, 2023

By using SQL Server Management Studio, you agree to its license terms and privacy
statement . If you have comments or suggestions or want to report issues, the best
way to contact the SSMS team is at SQL Server user feedback .

The SSMS 19.x installation doesn't upgrade or replace SSMS versions 18.x or earlier.
SSMS 19.x installs alongside previous versions, so both versions are available for use.
However, if you have an earlier preview version of SSMS 19 installed, you must uninstall
it before installing SSMS 19.1. You can see if you have a preview version by going to the
Help > About window.

If a computer contains side-by-side installations of SSMS, verify you start the correct
version for your specific needs. The latest version is labeled Microsoft SQL Server
Management Studio v19.1.

) Important

Beginning with SQL Server Management Studio (SSMS) 18.7, Azure Data Studio is
automatically installed alongside SSMS. Users of SQL Server Management Studio
are now able to benefit from the innovations and features in Azure Data Studio.
Azure Data Studio is a cross-platform and open-source desktop tool for your
environments, whether in the cloud, on-premises, or hybrid.

To learn more about Azure Data Studio, check out What is Azure Data Studio or
the FAQ.

Available languages
This release of SSMS can be installed in the following languages:

SQL Server Management Studio 19.1:

Chinese (Simplified) | Chinese (Traditional) | English (United States) | French |


German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian |
Spanish

 Tip

If you are accessing this page from a non-English language version and want to see
the most up-to-date content, please select Read in English at the top of this page.
You can download different languages from the US-English version site by selecting
available languages.

7 Note

The SQL Server PowerShell module is a separate install through the PowerShell
Gallery. For more information, see Download SQL Server PowerShell Module.

What's new
For details and more information about what's new in this release, see Release notes for
SQL Server Management Studio.
Previous versions
This article is for the latest version of SSMS only. To download previous versions of
SSMS, visit Previous SSMS releases.

7 Note

In December 2021, releases of SSMS prior to 18.6 will no longer authenticate to


Database Engines through Azure Active Directory with MFA.
To continue utilizing
Azure Active Directory authentication with MFA, you need SSMS 18.6 or later.

Connectivity to Azure Analysis Services through Azure Active Directory with MFA
requires SSMS 18.5.1 or later.

Unattended install
You can install SSMS using PowerShell.

Follow the steps below if you want to install SSMS in the background with no GUI
prompts.

1. Launch PowerShell with elevated permissions.

2. Type the command below.

PowerShell

$media_path = "<path where SSMS-Setup-ENU.exe file is located>"

$install_path = "<root location where all SSMS files will be


installed>"

$params = " /Install /Quiet SSMSInstallRoot=$install_path"

Start-Process -FilePath $media_path -ArgumentList $params -Wait

Example:

PowerShell

$media_path = "C:\Installers\SSMS-Setup-ENU.exe"

$install_path = "$env:SystemDrive\SSMSto"

$params = "/Install /Quiet SSMSInstallRoot=`"$install_path`""

Start-Process -FilePath $media_path -ArgumentList $params -Wait

You can also pass /Passive instead of /Quiet to see the setup UI.

3. If all goes well, you can see SSMS installed at


%systemdrive%\SSMSto\Common7\IDE\Ssms.exe based on the example. If
something went wrong, you could inspect the error code returned and review the
log file in %TEMP%\SSMSSetup.

Installation with Azure Data Studio


SSMS installs Azure Data Studio by default.
The installation of Azure Data Studio by SSMS is skipped if an equal or higher
version of Azure Data Studio is already installed.
The Azure Data Studio version can be found in the release notes.
The Azure Data Studio system installer requires the same security rights as the
SSMS installer.
The Azure Data Studio installation is completed with the default Azure Data Studio
installation options. These are to create a Start Menu folder and add Azure Data
Studio to PATH. A desktop shortcut isn't created, and Azure Data Studio isn't
registered as a default editor for any file type.
Localization of Azure Data Studio is accomplished through Language Pack
extensions. To localize Azure Data Studio, download the corresponding language
pack from the extension marketplace.
At this time, the installation of Azure Data Studio can be skipped by launching the
SSMS installer with the command line flag DoNotInstallAzureDataStudio=1 .

Uninstall
SSMS may install shared components if it's determined they're missing during SSMS
installation. SSMS won't automatically uninstall these components when you uninstall
SSMS.

The shared components are:

Azure Data Studio


Microsoft OLE DB Driver for SQL Server
Microsoft ODBC Driver 17 for SQL Server
Microsoft Visual C++ 2013 Redistributable (x86)
Microsoft Visual C++ 2017 Redistributable (x86)
Microsoft Visual C++ 2017 Redistributable (x64)
Microsoft Visual Studio Tools for Applications 2019
These components aren't uninstalled because they can be shared with other products. If
uninstalled, you may run the risk of disabling other products.

Supported SQL offerings


This version of SSMS works with SQL Server 2014 and higher and provides the
most significant level of support for working with the latest cloud features in Azure
SQL Database, Azure Synapse Analytics, and Microsoft Fabric.
Additionally, SSMS 19.x can be installed alongside with SSMS 18.x, SSMS 17.x,
SSMS 16.x.
SQL Server Integration Services (SSIS) - SSMS version 17.x or later doesn't support
connecting to the legacy SQL Server Integration Services service. To connect to an
earlier version of the legacy Integration Services, use the version of SSMS aligned
with the version of SQL Server. For example, use SSMS 16.x to connect to the
legacy SQL Server 2016 Integration Services service. SSMS 17.x and SSMS 16.x can
be installed on the same computer. Since the release of SQL Server 2012, the SSIS
Catalog database, SSISDB, is the recommended way to store, manage, run, and
monitor Integration Services packages. For details, see SSIS Catalog.

SSMS System Requirements


The current release of SSMS supports the following 64-bit platforms when used with the
latest available service pack:

Supported Operating Systems:

Windows 11 (64-bit)
Windows 10 (64-bit) version 1607 (10.0.14393) or later
Windows Server 2022 (64-bit)
Windows Server 2019 (64-bit)
Windows Server 2016 (64-bit)

Supported hardware:

1.8 GHz or faster x86 (Intel, AMD) processor. Dual-core or better recommended
2 GB of RAM; 4 GB of RAM recommended (2.5 GB minimum if running on a virtual
machine)
Hard disk space: Minimum of 2 GB up to 10 GB of available space

7 Note
SSMS is available only as a 32-bit application for Windows. If you need a tool that
runs on operating systems other than Windows, we recommend Azure Data Studio.
Azure Data Studio is a cross-platform tool that runs on macOS, Linux, and
Windows. For details, see Azure Data Studio.


Get help for SQL tools
All the ways to get help
SSMS user feedback .
Submit an Azure Data Studio Git issue
Contribute to Azure Data Studio
SQL Client Tools Forum
SQL Server Data Tools - MSDN forum
Support options for business users

Next steps
SQL tools
SQL Server Management Studio documentation
Azure Data Studio
Download SQL Server Data Tools (SSDT)
Latest updates
Azure Data Architecture Guide
SQL Server Blog


Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.

For more information, see How to contribute to SQL Server documentation


SQL tools overview
Article • 04/03/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

Azure Synapse Analytics
Analytics Platform System (PDW)

To manage your database, you need a tool. Whether your databases run in the cloud, on
Windows, on macOS, or on Linux, your tool doesn't need to run on the same platform as
the database.

You can view the links to the different SQL tools in the following tables.

7 Note

To download SQL Server, see Install SQL Server.

Recommended tools
The following tools provide a graphical user interface (GUI).

Tool Description Operating


system

A light-weight editor that can run on-demand SQL queries, view and Windows

save results as text, JSON, or Excel. Edit data, organize your favorite macOS

database connections, and browse database objects in a familiar Linux


object browsing experience.

Azure Data
Studio

Manage a SQL Server instance or database with full GUI support. Windows
Access, configure, manage, administer, and develop all components
of SQL Server, Azure SQL Database, and Azure Synapse Analytics.
Provides a single comprehensive utility that combines a broad
SQL Server group of graphical tools with a number of rich script editors to
Management provide access to SQL for developers and database administrators
Studio of all skill levels.
(SSMS)
Tool Description Operating
system

A modern development tool for building SQL Server relational Windows


databases, Azure SQL databases, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS)
SQL Server reports. With SSDT, you can design and deploy any SQL Server
Data Tools content type with the same ease as you would develop an
(SSDT) application in Visual Studio .

The mssql extension for Visual Studio Code is the official SQL Windows

Server extension that supports connections to SQL Server and rich macOS

editing experience for T-SQL in Visual Studio Code. Write T-SQL Linux
scripts in a light-weight editor.

Visual Studio
Code

Command-line tools
The tools below are the main command-line tools.

Tool Description Operating


system

bcp The bulk copy program utility (bcp) bulk copies data between an Windows

instance of Microsoft SQL Server and a data file in a user-specified macOS

format. Linux

mssql-cli mssql-cli is an interactive command-line tool for querying SQL Server. Windows

(preview) Also, query SQL Server with a command-line tool that features macOS

IntelliSense, syntax high-lighting, and more. Linux

mssql-conf mssql-conf configures SQL Server running on Linux. Linux

mssql- mssql-scripter is a multi-platform command-line experience for Windows

scripter scripting SQL Server databases. macOS

(preview) Linux

sqlcmd sqlcmd utility lets you enter Transact-SQL statements, system Windows

procedures, and script files at the command prompt. macOS

Linux

sqlpackage sqlpackage is a command-line utility that automates several database Windows

development tasks. macOS

Linux
Tool Description Operating
system

SQL Server SQL Server PowerShell provides cmdlets for working with SQL. Windows

PowerShell macOS

Linux

Migration and other tools


These tools are used to migrate, configure, and provide other features for SQL
databases.

Tool Description

Configuration Use SQL Server Configuration Manager to configure SQL Server services and
Manager configure network connectivity. Configuration Manager runs on Windows

Database Use Database Experimentation Assistant to evaluate a targeted version of SQL


Experimentation for a given workload.
Assistant

Data Migration The Data Migration Assistant tool helps you upgrade to a modern data
Assistant platform by detecting compatibility issues that can impact database
functionality in your new version of SQL Server or Azure SQL Database.

Distributed Use the Distributed Replay feature to help you assess the impact of future SQL
Replay Server upgrades. Also use Distributed Replay to help assess the impact of
hardware and operating system upgrades, and SQL Server tuning.

ssbdiagnose The ssbdiagnose utility reports issues in Service Broker conversations or the
configuration of Service Broker services.

SQL Server Use SQL Server Migration Assistant to automate database migration to SQL
Migration Server from Microsoft Access, DB2, MySQL, Oracle, and Sybase.
Assistant

If you're looking for additional tools that aren't mentioned on this page, see SQL
Command Prompt Utilities and Download SQL Server extended features and tools
Overview of SQL Server on Linux Azure
Virtual Machines
Article • 09/19/2022

Applies to:
SQL Server on Azure VM

SQL Server on Azure Virtual Machines enables you to use full versions of SQL Server in
the cloud without having to manage any on-premises hardware. SQL Server VMs also
simplify licensing costs when you pay as you go.

Azure virtual machines run in many different geographic regions around the world.
They also offer a variety of machine sizes. The virtual machine image gallery allows you
to create a SQL Server VM with the right version, edition, and operating system. This
makes virtual machines a good option for a many different SQL Server workloads.

If you're new to Azure SQL, check out the SQL Server on Azure VM Overview video from
our in-depth Azure SQL video series:
https://learn.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-
Overview-4-of-61/player

Get started with SQL Server VMs


To get started, choose a SQL Server virtual machine image with your required version,
edition, and operating system. The following sections provide direct links to the Azure
portal for the SQL Server virtual machine gallery images.

 Tip

For more information about how to understand pricing for SQL Server images, see
the pricing page for Linux VMs running SQL Server .

Version Operating system Edition

SQL Server Ubuntu 18.04 Enterprise , Standard , Web ,


2019 Developer

SQL Server Red Hat Enterprise Linux (RHEL) 8 Enterprise , Standard , Web ,
2019 Developer

SQL Server SUSE Linux Enterprise Server Enterprise , Standard , Web ,


2019 (SLES) v12 SP5 Developer
Version Operating system Edition

SQL Server Red Hat Enterprise Linux (RHEL) Enterprise , Standard , Web , Express ,
2017 7.4 Developer

SQL Server SUSE Linux Enterprise Server Enterprise , Standard , Web , Express ,
2017 (SLES) v12 SP2 Developer

SQL Server Ubuntu 16.04 LTS Enterprise , Standard , Web , Express ,


2017 Developer

7 Note

To see the available SQL Server virtual machine images for Windows, see Overview
of SQL Server on Azure Virtual Machines (Windows).

Installed packages
When you configure SQL Server on Linux, you install the Database Engine package and
then several optional packages depending on your requirements. The Linux virtual
machine images for SQL Server automatically install most packages for you. The
following table shows which packages are installed for each distribution.

Distribution Database Tools SQL Server Full-text SSIS HA add-


Engine agent search on

RHEL

SLES

Ubuntu

7 Note

SQL IaaS Agent extension for SQL Server on Azure Linux Virtual Machines is only
available for Ubuntu Linux distribution.

Related products and services

Linux virtual machines


Azure Virtual Machines overview

Storage
Introduction to Microsoft Azure Storage

Networking
Virtual Network overview
IP addresses in Azure
Create a Fully Qualified Domain Name in the Azure portal

SQL
SQL Server on Linux documentation
Azure SQL Database comparison

Next steps
Get started with SQL Server on Linux virtual machines:

Create a SQL Server VM in the Azure portal

Get answers to commonly asked questions about SQL Server VMs on Linux:

SQL Server on Azure Virtual Machines FAQ


Provision a Linux virtual machine
running SQL Server in the Azure portal
Article • 08/30/2022

Applies to:
SQL Server on Azure VM

In this quickstart tutorial, you use the Azure portal to create a Linux virtual machine with
SQL Server 2017 installed. You learn the following:

Create a Linux VM running SQL Server from the gallery


Connect to the new VM with ssh
Change the SA password
Configure for remote connections

Prerequisites
If you don't have an Azure subscription, create a free account before you begin.

Create a Linux VM with SQL Server installed


1. Sign in to the Azure portal .

2. In the left pane, select Create a resource.

3. In the Create a resource pane, select Compute.

4. Select See all next to the Featured heading.

5. In the search box, type SQL Server 2019, and select Enter to start the search.
6. Limit the search results by selecting Operating system > Redhat.

7. Select a SQL Server 2019 Linux image from the search results. This tutorial uses
SQL Server 2019 on RHEL74.

 Tip

The Developer edition lets you test or develop with the features of the
Enterprise edition but no SQL Server licensing costs. You only pay for the cost
of running the Linux VM.

8. Select Create.

Set up your Linux VM


1. In the Basics tab, select your Subscription and Resource Group.
2. In Virtual machine name, enter a name for your new Linux VM.

3. Then, type or select the following values:

Region: Select the Azure region that's right for you.

Availability options: Choose the availability and redundancy option that's


best for your apps and data.

Change size: Select this option to pick a machine size and when done,
choose Select. For more information about VM machine sizes, see VM sizes.
 Tip

For development and functional testing, use a VM size of DS2 or higher. For
performance testing, use DS13 or higher.

Authentication type: Select SSH public key.

7 Note

You have the choice of using an SSH public key or a Password for
authentication. SSH is more secure. For instructions on how to generate
an SSH key, see Create SSH keys on Linux and Mac for Linux VMs in
Azure.

Username: Enter the Administrator name for the VM.

SSH public key: Enter your RSA public key.

Public inbound ports: Choose Allow selected ports and pick the SSH (22)
port in the Select public inbound ports list. In this quickstart, this step is
necessary to connect and complete the SQL Server configuration. If you want
to remotely connect to SQL Server, you will need to manually allow traffic to
the default port (1433) used by Microsoft SQL Server for connections over the
Internet after the virtual machine is created.
4. Make any changes you want to the settings in the following additional tabs or
keep the default settings.

Disks
Networking
Management
Guest config
Tags

5. Select Review + create.

6. In the Review + create pane, select Create.

Connect to the Linux VM


If you already use a BASH shell, connect to the Azure VM using the ssh command. In the
following command, replace the VM user name and IP address to connect to your Linux
VM.

Bash

ssh azureadmin@40.55.55.555

You can find the IP address of your VM in the Azure portal.


If you're running on Windows and don't have a BASH shell, install an SSH client, such as
PuTTY.

1. Download and install PuTTY .

2. Run PuTTY.

3. On the PuTTY configuration screen, enter your VM's public IP address.

4. Select Open and enter your username and password at the prompts.

For more information about connecting to Linux VMs, see Create a Linux VM on Azure
using the Portal.

7 Note

If you see a PuTTY security alert about the server's host key not being cached in the
registry, choose from the following options. If you trust this host, select Yes to add
the key to PuTTy's cache and continue connecting. If you want to carry on
connecting just once, without adding the key to the cache, select No. If you don't
trust this host, select Cancel to abandon the connection.

Change the SA password


The new virtual machine installs SQL Server with a random SA password. Reset this
password before you connect to SQL Server with the SA login.

1. After connecting to your Linux VM, open a new command terminal.

2. Change the SA password with the following commands:

Bash
sudo systemctl stop mssql-server

sudo /opt/mssql/bin/mssql-conf set-sa-password

Enter a new SA password and password confirmation when prompted.

3. Restart the SQL Server service.

Bash

sudo systemctl start mssql-server

Add the tools to your path (optional)


Several SQL Server packages are installed by default, including the SQL Server
command-line Tools package. The tools package contains the sqlcmd and bcp tools. For
convenience, you can optionally add the tools path, /opt/mssql-tools/bin/ , to your
PATH environment variable.

1. Run the following commands to modify the PATH for both login sessions and
interactive/non-login sessions:

Bash

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

source ~/.bashrc

Configure for remote connections


If you need to remotely connect to SQL Server on the Azure VM, you must configure an
inbound rule on the network security group. The rule allows traffic on the port on which
SQL Server listens (default of 1433). The following steps show how to use the Azure
portal for this step.

 Tip

If you selected the inbound port MS SQL (1433) in the settings during provisioning,
these changes have been made for you. You can go to the next section on how to
configure the firewall.
1. In the portal, select Virtual machines, and then select your SQL Server VM.

2. In the left navigation pane, under Settings, select Networking.

3. In the Networking window, select Add inbound port under Inbound Port Rules.

4. In the Service list, select MS SQL.


5. Click OK to save the rule for your VM.

Open the firewall on RHEL


This tutorial directed you to create a Red Hat Enterprise Linux (RHEL) VM. If you want to
connect remotely to RHEL VMs, you also have to open up port 1433 on the Linux
firewall.

1. Connect to your RHEL VM.

2. In the BASH shell, run the following commands:

Bash

sudo firewall-cmd --zone=public --add-port=1433/tcp --permanent

sudo firewall-cmd --reload

Next steps
Now that you have a SQL Server 2017 virtual machine in Azure, you can connect locally
with sqlcmd to run Transact-SQL queries.

If you configured the Azure VM for remote SQL Server connections, you should be able
to connect remotely. For an example of how to connect remotely to SQL Server on Linux
from Windows, see Use SSMS on Windows to connect to SQL Server on Linux. To
connect with Visual Studio Code, see Use Visual Studio Code to create and run Transact-
SQL scripts for SQL Server

For more general information about SQL Server on Linux, see Overview of SQL Server
2017 on Linux. For more information about using SQL Server 2017 Linux virtual
machines, see Overview of SQL Server 2017 virtual machines on Azure.
SQL Server IaaS Agent extension for
Linux
Article • 05/22/2023

Applies to:
SQL Server on Azure VM

The SQL Server IaaS Agent extension (SqlIaasExtension) runs on SQL Server on Linux
Azure Virtual Machines (VMs) to automate management and administration tasks.

This article provides an overview of the extension. See Register with the extension to
learn more.

Overview
The SQL Server IaaS Agent extension enables integration with the Azure portal and
unlocks the following benefits for SQL Server on Linux Azure VMs:

Compliance: The extension offers a simplified method to fulfill the requirement of


notifying Microsoft that the Azure Hybrid Benefit has been enabled as is specified
in the product terms. This process negates needing to manage licensing
registration forms for each resource.

Simplified license management: The extension simplifies SQL Server license


management, and allows you to quickly identify SQL Server VMs with the Azure
Hybrid Benefit enabled using the Azure portal, Azure PowerShell or the Azure CLI:

PowerShell

PowerShell

Get-AzSqlVM | Where-Object {$_.LicenseType -eq 'AHUB'}

Free: There is no additional cost associated with the extension.

Installation
Register your SQL Server VM with the SQL Server IaaS Agent extension to create the
SQL virtual machine resource within your subscription, which is a separate resource from
the virtual machine resource. Unregistering your SQL Server VM from the extension will
remove the SQL virtual machine resource from your subscription but will not drop the
actual virtual machine.

The SQL Server IaaS Agent extension for Linux is currently only available with limited
functionality.

Verify extension status


Use the Azure portal or Azure PowerShell to check the status of the extension.

Azure portal
Verify the extension is installed by using the Azure portal.

Go to your Virtual machine resource in the Azure portal (not the SQL virtual machines
resource, but the resource for your VM). Select Extensions under Settings. You should
see the SqlIaasExtension extension listed, as in the following example:

Azure PowerShell
You can also use the Get-AzVMSqlServerExtension Azure PowerShell cmdlet:

PowerShell
Get-AzVMSqlServerExtension -VMName "vmname" -ResourceGroupName
"resourcegroupname"

The previous command confirms that the agent is installed and provides general status
information. You can get specific status information about automated backup and
patching by using the following commands:

PowerShell

$sqlext = Get-AzVMSqlServerExtension -VMName "vmname" -ResourceGroupName


"resourcegroupname"

$sqlext.AutoPatchingSettings

$sqlext.AutoBackupSettings

Limitations
The Linux SQL IaaS Agent extension has the following limitations:

Only SQL Server VMs running on the Ubuntu Linux operating system are
supported. Other Linux distributions are not currently supported.
SQL Server VMs running Ubuntu Linux Pro are not supported.
SQL Server VMs running on generalized images are not supported.
Only SQL Server VMs deployed through the Azure Resource Manager are
supported. SQL Server VMs deployed through the classic model are not supported.
SQL Server with only a single instance. Multiple instances are not supported.

Privacy statement
When using SQL Server on Azure VMs and the SQL IaaS Agent extension, consider the
following privacy statements:

Data collection: The SQL IaaS Agent extension collects data for the express
purpose of giving customers optional benefits when using SQL Server on Azure
Virtual Machines. Microsoft will not use this data for licensing audits without the
customer's advance consent. See the SQL Server privacy supplement for more
information.

In-region data residency: SQL Server on Azure VMs and SQL IaaS Agent Extension
do not move or store customer data out of the region in which the VMs are
deployed.
Next steps
For more information about running SQL Server on Azure Virtual Machines, see the
What is SQL Server on Azure Linux Virtual Machines?.

To learn more, see frequently asked questions.


Register Linux SQL Server VM with SQL
IaaS Agent extension
Article • 03/17/2023

Applies to:
SQL Server on Azure VM

Register your SQL Server VM with the SQL IaaS Agent extension to unlock a wealth of
feature benefits for your SQL Server on Linux Azure VM.

Overview
Registering with the SQL Server IaaS Agent extension creates the SQL virtual machine
resource within your subscription, which is a separate resource from the virtual machine
resource. Unregistering your SQL Server VM from the extension removes the SQL virtual
machine resource but will not drop the actual virtual machine.

To utilize the SQL IaaS Agent extension, you must first register your subscription with
the Microsoft.SqlVirtualMachine provider, which gives the SQL IaaS Agent extension
the ability to create resources within that specific subscription.

) Important

The SQL IaaS Agent extension collects data for the express purpose of giving
customers optional benefits when using SQL Server within Azure Virtual Machines.
Microsoft will not use this data for licensing audits without the customer's advance
consent. See the SQL Server privacy supplement for more information.

Prerequisites
To register your SQL Server VM with the extension, you'll need:

An Azure subscription .
An Azure Resource Model Ubuntu Linux virtual machine with SQL Server 2017 (or
greater) deployed to the public or Azure Government cloud.
The latest version of Azure CLI or Azure PowerShell (5.0 minimum).

Register subscription with RP


To register your SQL Server VM with the SQL IaaS Agent extension, you must first
register your subscription with the Microsoft.SqlVirtualMachine resource provider (RP).
This gives the SQL IaaS Agent extension the ability to create resources within your
subscription. You can do so by using the Azure portal, the Azure CLI, or Azure
PowerShell.

Azure portal
Register your subscription with the resource provider by using the Azure portal:

1. Open the Azure portal and go to All Services.


2. Go to Subscriptions and select the subscription of interest.
3. On the Subscriptions page, select Resource providers under Settings.
4. Enter sql in the filter to bring up the SQL-related resource providers.
5. Select Register, Re-register, or Unregister for the Microsoft.SqlVirtualMachine
provider, depending on your desired action.

Command line
Register your Azure subscription with the Microsoft.SqlVirtualMachine provider using
either Azure CLI or Azure PowerShell.

Azure CLI

Register your subscription with the resource provider by using the Azure CLI:

Azure CLI

# Register the SQL IaaS Agent extension to your subscription

az provider register --namespace Microsoft.SqlVirtualMachine

Register VM
The SQL IaaS Agent extension on Linux is only available in lightweight mode, which
supports only changing the license type and edition of SQL Server. Use the Azure CLI or
Azure PowerShell to register your SQL Server VM with the extension in lightweight
mode for limited functionality.

Provide the SQL Server license type as either pay-as-you-go ( PAYG ) to pay per usage,
Azure Hybrid Benefit ( AHUB ) to use your own license, or disaster recovery ( DR ) to
activate the free DR replica license.

Azure CLI

Register a SQL Server VM in lightweight mode with the Azure CLI:

Azure CLI

# Register Enterprise or Standard self-installed VM in Lightweight mode

az sql vm create --name <vm_name> --resource-group <resource_group_name>


--location <vm_location> --license-type <license_type>

Verify registration status


You can verify if your SQL Server VM has already been registered with the SQL IaaS
Agent extension by using the Azure portal, the Azure CLI, or Azure PowerShell.

Azure portal
Verify the registration status by using the Azure portal:

1. Sign in to the Azure portal .


2. Go to your SQL virtual machines resource.
3. Select your SQL Server VM from the list. If your SQL Server VM is not listed here, it
likely hasn't been registered with the SQL IaaS Agent extension.

Command line
Verify current SQL Server VM registration status using either Azure CLI or Azure
PowerShell. ProvisioningState shows as Succeeded if registration was successful.
Azure CLI

Verify the registration status by using the Azure CLI:

Azure CLI

az sql vm show -n <vm_name> -g <resource_group>

An error indicates that the SQL Server VM has not been registered with the extension.

Automatic registration
Automatic registration is supported for Ubuntu Linux VMs.

Next steps
For more information, see the following articles:

Overview of SQL Server on a Windows VM


FAQ for SQL Server on a Windows VM
Pricing guidance for SQL Server on a Windows VM
Release notes for SQL Server on a Windows VM
Tutorial: Configure availability groups
for SQL Server on RHEL virtual machines
in Azure
Article • 03/15/2023

Applies to:
SQL Server on Azure VM

7 Note

We use SQL Server 2017 with RHEL 7.6 in this tutorial, but it is possible to use SQL
Server 2019 in RHEL 7 or RHEL 8 to configure high availability. The commands to
configure the pacemake cluster and availability group resources has changed in
RHEL 8, and you'll want to look at the article Create availability group resource and
RHEL 8 resources for more information on the correct commands.

In this tutorial, you learn how to:

" Create a new resource group, availability set, and Linux virtual machines (VMs)
" Enable high availability (HA)
" Create a Pacemaker cluster
" Configure a fencing agent by creating a STONITH device
" Install SQL Server and mssql-tools on RHEL
" Configure SQL Server Always On availability group
" Configure availability group (AG) resources in the Pacemaker cluster
" Test a failover and the fencing agent

This tutorial will use the Azure CLI to deploy resources in Azure.

If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see
Quickstart for Bash in Azure Cloud Shell.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you're
running on Windows or macOS, consider running Azure CLI in a Docker container.
For more information, see How to run the Azure CLI in a Docker container.

If you're using a local installation, sign in to the Azure CLI by using the az login
command. To finish the authentication process, follow the steps displayed in
your terminal. For other sign-in options, see Sign in with the Azure CLI.

When you're prompted, install the Azure CLI extension on first use. For more
information about extensions, see Use extensions with the Azure CLI.

Run az version to find the version and dependent libraries that are installed. To
upgrade to the latest version, run az upgrade.

This article requires version 2.0.30 or later of the Azure CLI. If using Azure Cloud
Shell, the latest version is already installed.

Create a resource group


If you have more than one subscription, set the subscription that you want deploy these
resources to.

Use the following command to create a resource group <resourceGroupName> in a


region. Replace <resourceGroupName> with a name of your choosing. We're using East
US 2 for this tutorial. For more information, see the following Quickstart.

Azure CLI

az group create --name <resourceGroupName> --location eastus2

Create an availability set


The next step is to create an availability set. Run the following command in Azure Cloud
Shell, and replace <resourceGroupName> with your resource group name. Choose a name
for <availabilitySetName> .

Azure CLI

az vm availability-set create \

--resource-group <resourceGroupName> \

--name <availabilitySetName> \

--platform-fault-domain-count 2 \

--platform-update-domain-count 2

You should get the following results once the command completes:
Output

"id":
"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provider
s/Microsoft.Compute/availabilitySets/<availabilitySetName>",

"location": "eastus2",

"name": "<availabilitySetName>",

"platformFaultDomainCount": 2,

"platformUpdateDomainCount": 2,
"proximityPlacementGroup": null,

"resourceGroup": "<resourceGroupName>",

"sku": {

"capacity": null,

"name": "Aligned",

"tier": null

},

"statuses": null,

"tags": {},

"type": "Microsoft.Compute/availabilitySets",

"virtualMachines": []

Create RHEL VMs inside the availability set

2 Warning

If you choose a Pay-As-You-Go (PAYG) RHEL image, and configure high availability
(HA), you may be required to register your subscription. This can cause you to pay
twice for the subscription, as you will be charged for the Microsoft Azure RHEL
subscription for the VM, and a subscription to Red Hat. For more information, see
https://access.redhat.com/solutions/2458541 .

To avoid being "double billed", use a RHEL HA image when creating the Azure VM.
Images offered as RHEL-HA images are also PAYG images with HA repo pre-
enabled.

1. Get a list of virtual machine images that offer RHEL with HA:

Azure CLI

az vm image list --all --offer "RHEL-HA"

You should see the following results:


Output

"offer": "RHEL-HA",

"publisher": "RedHat",

"sku": "7.4",

"urn": "RedHat:RHEL-HA:7.4:7.4.2019062021",

"version": "7.4.2019062021"

},

"offer": "RHEL-HA",

"publisher": "RedHat",

"sku": "7.5",

"urn": "RedHat:RHEL-HA:7.5:7.5.2019062021",

"version": "7.5.2019062021"

},

"offer": "RHEL-HA",

"publisher": "RedHat",

"sku": "7.6",

"urn": "RedHat:RHEL-HA:7.6:7.6.2019062019",

"version": "7.6.2019062019"

},

"offer": "RHEL-HA",

"publisher": "RedHat",

"sku": "8.0",

"urn": "RedHat:RHEL-HA:8.0:8.0.2020021914",

"version": "8.0.2020021914"

},

"offer": "RHEL-HA",

"publisher": "RedHat",

"sku": "8.1",

"urn": "RedHat:RHEL-HA:8.1:8.1.2020021914",

"version": "8.1.2020021914"

},

"offer": "RHEL-HA",

"publisher": "RedHat",

"sku": "80-gen2",

"urn": "RedHat:RHEL-HA:80-gen2:8.0.2020021915",

"version": "8.0.2020021915"

},

"offer": "RHEL-HA",

"publisher": "RedHat",

"sku": "81_gen2",

"urn": "RedHat:RHEL-HA:81_gen2:8.1.2020021915",

"version": "8.1.2020021915"

For this tutorial, we're choosing the image RedHat:RHEL-HA:7.6:7.6.2019062019 for


the RHEL 7 example and choosing RedHat:RHEL-HA:8.1:8.1.2020021914 for the
RHEL 8 example.

You can also choose SQL Server 2019 pre-installed on RHEL8-HA images. To get
the list of these images, run the following command:

Azure CLI

az vm image list --all --offer "sql2019-rhel8"

You should see the following results:

Output

"offer": "sql2019-rhel8",

"publisher": "MicrosoftSQLServer",

"sku": "enterprise",

"urn": "MicrosoftSQLServer:sql2019-rhel8:enterprise:15.0.200317",

"version": "15.0.200317"

},

"offer": "sql2019-rhel8",

"publisher": "MicrosoftSQLServer",

"sku": "enterprise",

"urn": "MicrosoftSQLServer:sql2019-rhel8:enterprise:15.0.200512",

"version": "15.0.200512"

},

"offer": "sql2019-rhel8",

"publisher": "MicrosoftSQLServer",

"sku": "sqldev",

"urn": "MicrosoftSQLServer:sql2019-rhel8:sqldev:15.0.200317",

"version": "15.0.200317"

},

"offer": "sql2019-rhel8",

"publisher": "MicrosoftSQLServer",

"sku": "sqldev",

"urn": "MicrosoftSQLServer:sql2019-rhel8:sqldev:15.0.200512",

"version": "15.0.200512"

},

"offer": "sql2019-rhel8",

"publisher": "MicrosoftSQLServer",

"sku": "standard",

"urn": "MicrosoftSQLServer:sql2019-rhel8:standard:15.0.200317",

"version": "15.0.200317"

},

"offer": "sql2019-rhel8",

"publisher": "MicrosoftSQLServer",

"sku": "standard",

"urn": "MicrosoftSQLServer:sql2019-rhel8:standard:15.0.200512",

"version": "15.0.200512"

If you do use one of the above images to create the virtual machines, it has SQL
Server 2019 pre-installed. Skip the Install SQL Server and mssql-tools section as
described in this article.

) Important

Machine names must be less than 15 characters to set up availability group.


Username cannot contain upper case characters, and passwords must have
more than 12 characters.

2. We want to create 3 VMs in the availability set. Replace the following in the
command below:

<resourceGroupName>

<VM-basename>
<availabilitySetName>

<VM-Size> - An example would be "Standard_D16_v3"

<username>
<adminPassword>

Azure CLI

for i in `seq 1 3`; do

az vm create \

--resource-group <resourceGroupName> \

--name <VM-basename>$i \

--availability-set <availabilitySetName> \

--size "<VM-Size>" \

--image "RedHat:RHEL-HA:7.6:7.6.2019062019" \

--admin-username "<username>" \

--admin-password "<adminPassword>" \

--authentication-type all \

--generate-ssh-keys

done

The above command creates the VMs, and creates a default VNet for those VMs. For
more information on the different configurations, see the az vm create article.

You should get results similar to the following once the command completes for each
VM:

Output

"fqdns": "",

"id":
"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provider
s/Microsoft.Compute/virtualMachines/<VM1>",

"location": "eastus2",

"macAddress": "<Some MAC address>",

"powerState": "VM running",

"privateIpAddress": "<IP1>",

"publicIpAddress": "",

"resourceGroup": "<resourceGroupName>",

"zones": ""

) Important

The default image that is created with the above command creates a 32GB OS disk
by default. You could potentially run out of space with this default installation. You
can use the following parameter added to the above az vm create command to
create an OS disk with 128GB as an example: --os-disk-size-gb 128 .

You can then configure Logical Volume Manager (LVM) if you need to expand
appropriate folder volumes to accomodate your installation.

Test connection to the created VMs


Connect to VM1 or the other VMs using the following command in Azure Cloud Shell. If
you are unable to find your VM IPs, follow this Quickstart on Azure Cloud Shell.

Azure CLI

ssh <username>@publicipaddress

If the connection is successful, you should see the following output representing the
Linux terminal:
Output

[<username>@<VM1> ~]$

Type exit to leave the SSH session.

Enable high availability

) Important

In order to complete this portion of the tutorial, you must have a subscription for
RHEL and the High Availability Add-on. If you are using an image recommended in
the previous section, you do not have to register another subscription.

Connect to each VM node and follow the guide below to enable HA. For more
information, see enable high availability subscription for RHEL.

 Tip

It will be easier if you open an SSH session to each of the VMs simultaneously as
the same commands will need to be run on each VM throughout the article.

If you are copying and pasting multiple sudo commands, and are prompted for a
password, the additional commands will not run. Run each command separately.

1. Run the following commands on each VM to open the Pacemaker firewall ports:

Bash

sudo firewall-cmd --permanent --add-service=high-availability

sudo firewall-cmd --reload

2. Update and install Pacemaker packages on all nodes using the following
commands:

7 Note

nmap is installed as part of this command block as a tool to find available IP


addresses in your network. You do not have to install nmap, but it will be
useful later in this tutorial.
Bash

sudo yum update -y

sudo yum install -y pacemaker pcs fence-agents-all resource-agents


fence-agents-azure-arm nmap

sudo reboot

3. Set the password for the default user that is created when installing Pacemaker
packages. Use the same password on all nodes.

Bash

sudo passwd hacluster

4. Use the following command to open the hosts file and set up host name
resolution. For more information, see Configure AG on configuring the hosts file.

sudo vi /etc/hosts

In the vi editor, enter i to insert text, and on a blank line, add the Private IP of the
corresponding VM. Then add the VM name after a space next to the IP. Each line
should have a separate entry.

Output

<IP1> <VM1>

<IP2> <VM2>

<IP3> <VM3>

) Important

We recommend that you use your Private IP address above. Using the Public
IP address in this configuration will cause the setup to fail and we don't
recommend exposing your VM to external networks.

To exit the vi editor, first hit the Esc key, and then enter the command :wq to write
the file and quit.

Create the Pacemaker cluster


In this section, we will enable and start the pcsd service, and then configure the cluster.
For SQL Server on Linux, the cluster resources are not created automatically. We'll need
to enable and create the pacemaker resources manually. For more information, see the
article on configuring a failover cluster instance for RHEL

Enable and start pcsd service and Pacemaker


1. Run the commands on all nodes. These commands allow the nodes to rejoin the
cluster after reboot.

Bash

sudo systemctl enable pcsd

sudo systemctl start pcsd

sudo systemctl enable pacemaker

2. Remove any existing cluster configuration from all nodes. Run the following
command:

Bash

sudo pcs cluster destroy

sudo systemctl enable pacemaker

3. On the primary node, run the following commands to set up the cluster.

When running the pcs cluster auth command to authenticate the cluster
nodes, you will be prompted for a password. Enter the password for the
hacluster user created earlier.

RHEL7

Bash

sudo pcs cluster auth <VM1> <VM2> <VM3> -u hacluster

sudo pcs cluster setup --name az-hacluster <VM1> <VM2> <VM3> --token
30000

sudo pcs cluster start --all

sudo pcs cluster enable --all

RHEL8

For RHEL 8, you will need to authenticate the nodes separately. Manually enter in
the username and password for hacluster when prompted.
Bash

sudo pcs host auth <node1> <node2> <node3>

sudo pcs cluster setup <clusterName> <node1> <node2> <node3>

sudo pcs cluster start --all

sudo pcs cluster enable --all

4. Run the following command to check that all nodes are online.

Bash

sudo pcs status

RHEL 7

If all nodes are online, you will see an output similar to the following:

Output

Cluster name: az-hacluster

WARNINGS:

No stonith devices and stonith-enabled is not false

Stack: corosync

Current DC: <VM2> (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition


with quorum

Last updated: Fri Aug 23 18:27:57 2019

Last change: Fri Aug 23 18:27:56 2019 by hacluster via crmd on <VM2>

3 nodes configured

0 resources configured

Online: [ <VM1> <VM2> <VM3> ]

No resources

Daemon Status:

corosync: active/enabled

pacemaker: active/enabled

pcsd: active/enabled

RHEL 8

Output

Cluster name: az-hacluster

WARNINGS:

No stonith devices and stonith-enabled is not false

Cluster Summary:

* Stack: corosync

* Current DC: <VM2> (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition


with quorum

* Last updated: Fri Aug 23 18:27:57 2019

* Last change: Fri Aug 23 18:27:56 2019 by hacluster via crmd on <VM2>

* 3 nodes configured

* 0 resource instances configured

Node List:

* Online: [ <VM1> <VM2> <VM3> ]

Full List of Resources:

* No resources

Daemon Status:

corosync: active/enabled

pacemaker: active/enabled

pcsd: active/enabled

5. Set expected votes in the live cluster to 3. This command only affects the live
cluster, and does not change the configuration files.

On all nodes, set the expected votes with the following command:

Bash

sudo pcs quorum expected-votes 3

Configure the fencing agent


A STONITH device provides a fencing agent. The below instructions are modified for this
tutorial. For more information, see create a STONITH device.

Check the version of the Azure Fence Agent to ensure that it's updated. Use the
following command:

Bash

sudo yum info fence-agents-azure-arm

You should see a similar output to the below example.


Output

Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-


manager

Installed Packages

Name : fence-agents-azure-arm

Arch : x86_64

Version : 4.2.1

Release : 11.el7_6.8

Size : 28 k

Repo : installed

From repo : rhel-ha-for-rhel-7-server-eus-rhui-rpms

Summary : Fence agent for Azure Resource Manager

URL : https://github.com/ClusterLabs/fence-agents

License : GPLv2+ and LGPLv2+

Description : The fence-agents-azure-arm package contains a fence agent for


Azure instances.

Register a new application in Azure Active Directory


1. Go to https://portal.azure.com
2. Open the Azure Active Directory blade . Go to Properties and write down the
Directory ID. This is the tenant ID
3. Click App registrations
4. Click New registration
5. Enter a Name like <resourceGroupName>-app , select Accounts in this organization
directory only
6. Select Application Type Web, enter a sign-on URL (for example http://localhost)
and click Add. The sign-on URL is not used and can be any valid URL. Once done,
Click Register
7. Select Certificates and secrets for your new App registration, then click New client
secret
8. Enter a description for a new key (client secret), select Never expires and click Add
9. Write down the value of the secret. It is used as the password for the Service
Principal
10. Select Overview. Write down the Application ID. It is used as the username (login
ID in the steps below) of the Service Principal

Create a custom role for the fence agent


Follow the tutorial to Create an Azure custom role using Azure CLI.

Your json file should look similar to the following:


Replace <username> with a name of your choice. This is to avoid any duplication
when creating this role definition.
Replace <subscriptionId> with your Azure Subscription ID.

JSON

"Name": "Linux Fence Agent Role-<username>",

"Id": null,

"IsCustom": true,

"Description": "Allows to power-off and start virtual machines",


"Actions": [

"Microsoft.Compute/*/read",

"Microsoft.Compute/virtualMachines/powerOff/action",

"Microsoft.Compute/virtualMachines/start/action"

],

"NotActions": [

],

"AssignableScopes": [

"/subscriptions/<subscriptionId>"

To add the role, run the following command:

Replace <filename> with the name of the file.


If you are executing the command from a path other than the folder that the file is
saved to, include the folder path of the file in the command.

Azure CLI

az role definition create --role-definition "<filename>.json"

You should see the following output:

Output

"assignableScopes": [

"/subscriptions/<subscriptionId>"

],

"description": "Allows to power-off and start virtual machines",

"id":
"/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/roleDefin
itions/<roleNameId>",

"name": "<roleNameId>",

"permissions": [

"actions": [

"Microsoft.Compute/*/read",

"Microsoft.Compute/virtualMachines/powerOff/action",

"Microsoft.Compute/virtualMachines/start/action"

],

"dataActions": [],

"notActions": [],

"notDataActions": []

],

"roleName": "Linux Fence Agent Role-<username>",

"roleType": "CustomRole",

"type": "Microsoft.Authorization/roleDefinitions"

Assign the custom role to the Service Principal


Assign the custom role Linux Fence Agent Role-<username> that was created in the last
step to the Service Principal. Do not use the Owner role anymore!

1. Go to https://portal.azure.com
2. Open the All resources blade
3. Select the virtual machine of the first cluster node
4. Click Access control (IAM)
5. Click Add a role assignment
6. Select the role Linux Fence Agent Role-<username> from the Role list
7. In the Select list, enter the name of the application you created above,
<resourceGroupName>-app

8. Click Save
9. Repeat the steps above for the all cluster node.

Create the STONITH devices


Run the following commands on node 1:

Replace the <ApplicationID> with the ID value from your application registration.
Replace the <servicePrincipalPassword> with the value from the client secret.
Replace the <resourceGroupName> with the resource group from your subscription
used for this tutorial.
Replace the <tenantID> and the <subscriptionId> from your Azure Subscription.

Bash

sudo pcs property set stonith-timeout=900

sudo pcs stonith create rsc_st_azure fence_azure_arm login="<ApplicationID>"


passwd="<servicePrincipalPassword>" resourceGroup="<resourceGroupName>"
tenantId="<tenantID>" subscriptionId="<subscriptionId>" power_timeout=240
pcmk_reboot_timeout=900

Since we already added a rule to our firewall to allow the HA service ( --add-
service=high-availability ), there's no need to open the following firewall ports on all
nodes: 2224, 3121, 21064, 5405. However, if you are experiencing any type of
connection issues with HA, use the following command to open these ports that are
associated with HA.

 Tip

You can optionally add all ports in this tutorial at once to save some time. The ports
that need to be opened are explained in their relative sections below. If you would
like to add all ports now, add the additional ports: 1433 and 5022.

Bash

sudo firewall-cmd --zone=public --add-port=2224/tcp --add-port=3121/tcp --


add-port=21064/tcp --add-port=5405/tcp --permanent

sudo firewall-cmd --reload

Install SQL Server and mssql-tools

7 Note

If you have created the VMs with the SQL Server 2019 pre-installed on RHEL8-HA
then you can skip the below steps to install SQL Server and mssql-tools and start
the Configure an Availability Group section after you setup the sa password on all
the VMs by running the command sudo /opt/mssql/bin/mssql-conf set-sa-
password on all VMs.

Use the below section to install SQL Server and mssql-tools on the VMs. You can choose
one of the below samples to install SQL Server 2017 on RHEL 7 or SQL Server 2019 on
RHEL 8. Perform each of these actions on all nodes. For more information, see Install
SQL Server on a Red Hat VM.

Installing SQL Server on the VMs


The following commands are used to install SQL Server:
RHEL 7 with SQL Server 2017

Bash

sudo curl -o /etc/yum.repos.d/mssql-server.repo


https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo

sudo yum install -y mssql-server

sudo /opt/mssql/bin/mssql-conf setup

sudo yum install mssql-server-ha

RHEL 8 with SQL Server 2019

Bash

sudo curl -o /etc/yum.repos.d/mssql-server.repo


https://packages.microsoft.com/config/rhel/8/mssql-server-2019.repo

sudo yum install -y mssql-server

sudo /opt/mssql/bin/mssql-conf setup

sudo yum install mssql-server-ha

Open firewall port 1433 for remote connections


You'll need to open port 1433 on the VM in order to connect remotely. Use the
following commands to open port 1433 in the firewall of each VM:

Bash

sudo firewall-cmd --zone=public --add-port=1433/tcp --permanent

sudo firewall-cmd --reload

Installing SQL Server command-line tools


The following commands are used to install SQL Server command-line tools. For more
information, see install the SQL Server command-line tools.

RHEL 7

Bash

sudo curl -o /etc/yum.repos.d/msprod.repo


https://packages.microsoft.com/config/rhel/7/prod.repo

sudo yum install -y mssql-tools unixODBC-devel

RHEL 8
Bash

sudo curl -o /etc/yum.repos.d/msprod.repo


https://packages.microsoft.com/config/rhel/8/prod.repo

sudo yum install -y mssql-tools unixODBC-devel

7 Note

For convenience, add /opt/mssql-tools/bin/ to your PATH environment variable.


This enables you to run the tools without specifying the full path. Run the following
commands to modify the PATH for both login sessions and interactive/non-login
sessions:

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

source ~/.bashrc

Check the status of the SQL Server


Once you are done with the configuration, you can check the status of SQL Server and
verify that it is running:

Bash

systemctl status mssql-server --no-pager

You should see the following output:

Output

● mssql-server.service - Microsoft SQL Server Database Engine

Loaded: loaded (/usr/lib/systemd/system/mssql-server.service; enabled;


vendor preset: disabled)

Active: active (running) since Thu 2019-12-05 17:30:55 UTC; 20min ago

Docs: https://learn.microsoft.com/sql/linux

Main PID: 11612 (sqlservr)

CGroup: /system.slice/mssql-server.service

├─11612 /opt/mssql/bin/sqlservr

└─11640 /opt/mssql/bin/sqlservr

Configure an availability group


Use the following steps to configure a SQL Server Always On availability group for your
VMs. For more information, see Configure SQL Server Always On availability groups for
high availability on Linux

Enable Always On availability groups and restart mssql-


server
Enable Always On availability groups on each node that hosts a SQL Server instance.
Then restart mssql-server. Run the following script:

sudo /opt/mssql/bin/mssql-conf set hadr.hadrenabled 1

sudo systemctl restart mssql-server

Create a certificate
We currently don't support AD authentication to the AG endpoint. Therefore, we must
use a certificate for AG endpoint encryption.

1. Connect to all nodes using SQL Server Management Studio (SSMS) or SQL CMD.
Run the following commands to enable an AlwaysOn_health session and create a
master key:

) Important

If you are connecting remotely to your SQL Server instance, you will need to
have port 1433 open on your firewall. You'll also need to allow inbound
connections to port 1433 in your NSG for each VM. For more information, see
Create a security rule for creating an inbound security rule.

Replace the <Master_Key_Password> with your own password.

SQL

ALTER EVENT SESSION AlwaysOn_health ON SERVER WITH (STARTUP_STATE=ON);

GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<Master_Key_Password>';

2. Connect to the primary replica using SSMS or SQL CMD. The below commands will
create a certificate at /var/opt/mssql/data/dbm_certificate.cer and a private key
at var/opt/mssql/data/dbm_certificate.pvk on your primary SQL Server replica:

Replace the <Private_Key_Password> with your own password.

SQL

CREATE CERTIFICATE dbm_certificate WITH SUBJECT = 'dbm';


GO

BACKUP CERTIFICATE dbm_certificate

TO FILE = '/var/opt/mssql/data/dbm_certificate.cer'

WITH PRIVATE KEY (

FILE = '/var/opt/mssql/data/dbm_certificate.pvk',

ENCRYPTION BY PASSWORD = '<Private_Key_Password>'

);

GO

Exit the SQL CMD session by running the exit command, and return back to your SSH
session.

Copy the certificate to the secondary replicas and create


the certificates on the server
1. Copy the two files that were created to the same location on all servers that will
host availability replicas.

On the primary server, run the following scp command to copy the certificate to
the target servers:

Replace <username> and <VM2> with the user name and target VM name that
you are using.
Run this command for all secondary replicas.

7 Note

You don't have to run sudo -i , which gives you the root environment. You
could just run the sudo command in front of each command as we previously
did in this tutorial.

Bash

# The below command allows you to run commands in the root environment

sudo -i

Bash

scp /var/opt/mssql/data/dbm_certificate.*
<username>@<VM2>:/home/<username>

2. On the target server, run the following command:

Replace <username> with your user name.


The mv command moves the files or directory from one place to another.
The chown command is used to change the owner and group of files,
directories, or links.
Run these commands for all secondary replicas.

Bash

sudo -i

mv /home/<username>/dbm_certificate.* /var/opt/mssql/data/

cd /var/opt/mssql/data

chown mssql:mssql dbm_certificate.*

3. The following Transact-SQL script creates a certificate from the backup that you
created on the primary SQL Server replica. Update the script with strong
passwords. The decryption password is the same password that you used to create
the .pvk file in the previous step. To create the certificate, run the following script
using SQL CMD or SSMS on all secondary servers:

SQL

CREATE CERTIFICATE dbm_certificate

FROM FILE = '/var/opt/mssql/data/dbm_certificate.cer'

WITH PRIVATE KEY (

FILE = '/var/opt/mssql/data/dbm_certificate.pvk',

DECRYPTION BY PASSWORD = '<Private_Key_Password>'

);

GO

Create the database mirroring endpoints on all replicas


Run the following script on all SQL Server instances using SQL CMD or SSMS:

SQL

CREATE ENDPOINT [Hadr_endpoint]

AS TCP (LISTENER_PORT = 5022)

FOR DATABASE_MIRRORING (

ROLE = ALL,

AUTHENTICATION = CERTIFICATE dbm_certificate,

ENCRYPTION = REQUIRED ALGORITHM AES

);

GO

ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;

GO

Create the availability group


Connect to the SQL Server instance that hosts the primary replica using SQL CMD or
SSMS. Run the following command to create the availability group:

Replace ag1 with your desired Availability Group name.


Replace the <VM1> , <VM2> , and <VM3> values with the names of the SQL Server
instances that host the replicas.

SQL

CREATE AVAILABILITY GROUP [ag1]

WITH (DB_FAILOVER = ON, CLUSTER_TYPE = EXTERNAL)

FOR REPLICA ON

N'<VM1>'

WITH (

ENDPOINT_URL = N'tcp://<VM1>:5022',

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

FAILOVER_MODE = EXTERNAL,

SEEDING_MODE = AUTOMATIC

),

N'<VM2>'

WITH (

ENDPOINT_URL = N'tcp://<VM2>:5022',

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

FAILOVER_MODE = EXTERNAL,

SEEDING_MODE = AUTOMATIC

),

N'<VM3>'

WITH(

ENDPOINT_URL = N'tcp://<VM3>:5022',

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

FAILOVER_MODE = EXTERNAL,

SEEDING_MODE = AUTOMATIC

);

GO

ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE;

GO

Create a SQL Server login for Pacemaker


On all SQL Server instances, create a SQL Server login for Pacemaker. The following
Transact-SQL creates a login.

Replace <password> with your own complex password.

SQL

USE [master]

GO

CREATE LOGIN [pacemakerLogin] with PASSWORD= N'<password>';

GO

ALTER SERVER ROLE [sysadmin] ADD MEMBER [pacemakerLogin];

GO

On all SQL Server instances, save the credentials used for the SQL Server login.

1. Create the file:

Bash

sudo vi /var/opt/mssql/secrets/passwd

2. Add the following 2 lines to the file:

Bash

pacemakerLogin

<password>

To exit the vi editor, first hit the Esc key, and then enter the command :wq to write
the file and quit.

3. Make the file only readable by root:

Bash

sudo chown root:root /var/opt/mssql/secrets/passwd

sudo chmod 400 /var/opt/mssql/secrets/passwd

Join secondary replicas to the availability group


1. In order to join the secondary replicas to the AG, you'll need to open port 5022 on
the firewall for all servers. Run the following command in your SSH session:

Bash

sudo firewall-cmd --zone=public --add-port=5022/tcp --permanent

sudo firewall-cmd --reload

2. On your secondary replicas, run the following commands to join them to the AG:

SQL

ALTER AVAILABILITY GROUP [ag1] JOIN WITH (CLUSTER_TYPE = EXTERNAL);

GO

ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE;

GO

3. Run the following Transact-SQL script on the primary replica and each secondary
replica:

SQL

GRANT ALTER, CONTROL, VIEW DEFINITION ON AVAILABILITY GROUP::ag1 TO


pacemakerLogin;

GO

GRANT VIEW SERVER STATE TO pacemakerLogin;

GO

4. Once the secondary replicas are joined, you can see them in SSMS Object Explorer
by expanding the Always On High Availability node:
Add a database to the availability group
We will follow the configure availability group article on adding a database.

The following Transact-SQL commands are used in this step. Run these commands on
the primary replica:

SQL

CREATE DATABASE [db1]; -- creates a database named db1

GO

ALTER DATABASE [db1] SET RECOVERY FULL; -- set the database in full recovery
mode

GO

BACKUP DATABASE [db1] -- backs up the database to disk

TO DISK = N'/var/opt/mssql/data/db1.bak';

GO

ALTER AVAILABILITY GROUP [ag1] ADD DATABASE [db1]; -- adds the database db1
to the AG

GO

Verify that the database is created on the secondary


servers
On each secondary SQL Server replica, run the following query to see if the db1
database was created and is in a SYNCHRONIZED state:
SELECT * FROM sys.databases WHERE name = 'db1';

GO

SELECT DB_NAME(database_id) AS 'database', synchronization_state_desc FROM


sys.dm_hadr_database_replica_states;

If the synchronization_state_desc lists SYNCHRONIZED for db1 , this means the replicas
are synchronized. The secondaries are showing db1 in the primary replica.

Create availability group resources in the


Pacemaker cluster
We will be following the guide to create the availability group resources in the
Pacemaker cluster.

7 Note

This article contains references to the term slave, a term that Microsoft no longer
uses. When the term is removed from the software, we'll remove it from this article.

Create the AG cluster resource


1. Use one of the following commands based on the environment chosen earlier to
create the resource ag_cluster in the availability group ag1 .

RHEL 7

Bash

sudo pcs resource create ag_cluster ocf:mssql:ag ag_name=ag1 meta


failure-timeout=30s master notify=true

RHEL 8

Bash

sudo pcs resource create ag_cluster ocf:mssql:ag ag_name=ag1 meta


failure-timeout=30s promotable notify=true

2. Check your resource and ensure that they are online before proceeding using the
following command:
Bash

sudo pcs resource

You should see the following output:

RHEL 7

Output

[<username>@VM1 ~]$ sudo pcs resource

Master/Slave Set: ag_cluster-master [ag_cluster]

Masters: [ <VM1> ]

Slaves: [ <VM2> <VM3> ]

RHEL 8

Output

[<username>@VM1 ~]$ sudo pcs resource

* Clone Set: ag_cluster-clone [ag_cluster] (promotable):

* ag_cluster (ocf::mssql:ag) : Slave VMrhel3


(Monitoring)

* ag_cluster (ocf::mssql:ag) : Master VMrhel1


(Monitoring)

* ag_cluster (ocf::mssql:ag) : Slave VMrhel2


(Monitoring)

Create a virtual IP resource


1. Use an available static IP address from your network to create a virtual IP resource.
You can find one using the command tool nmap .

Bash

nmap -sP <IPRange>

# For example: nmap -sP 10.0.0.*

# The above will scan for all IP addresses that are already occupied in
the 10.0.0.x space.

2. Set the stonith-enabled property to false

Bash

sudo pcs property set stonith-enabled=false

3. Create the virtual IP resource by using the following command:

Replace the <availableIP> value below with an unused IP address.

Bash

sudo pcs resource create virtualip ocf:heartbeat:IPaddr2 ip=


<availableIP>

Add constraints
1. To ensure that the IP address and the AG resource are running on the same node,
a colocation constraint must be configured. Run the following command:

RHEL 7

Bash

sudo pcs constraint colocation add virtualip ag_cluster-master INFINITY


with-rsc-role=Master

RHEL 8

Bash

sudo pcs constraint colocation add virtualip with master ag_cluster-


clone INFINITY with-rsc-role=Master

2. Create an ordering constraint to ensure that the AG resource is up and running


before the IP address. While the colocation constraint implies an ordering
constraint, this enforces it.

RHEL 7

Bash

sudo pcs constraint order promote ag_cluster-master then start


virtualip

RHEL 8

Bash

sudo pcs constraint order promote ag_cluster-clone then start virtualip

3. To verify the constraints, run the following command:

Bash

sudo pcs constraint list --full

You should see the following output:

RHEL 7

Location Constraints:

Ordering Constraints:

promote ag_cluster-master then start virtualip (kind:Mandatory)


(id:order-ag_cluster-master-virtualip-mandatory)

Colocation Constraints:

virtualip with ag_cluster-master (score:INFINITY) (with-rsc-


role:Master) (id:colocation-virtualip-ag_cluster-master-INFINITY)

Ticket Constraints:

RHEL 8

Output

Location Constraints:

Ordering Constraints:

promote ag_cluster-clone then start virtualip (kind:Mandatory)


(id:order-ag_cluster-clone-virtualip-mandatory)

Colocation Constraints:

virtualip with ag_cluster-clone (score:INFINITY) (with-rsc-


role:Master) (id:colocation-virtualip-ag_cluster-clone-INFINITY)

Ticket Constraints:

Re-enable stonith
We're ready for testing. Re-enable stonith in the cluster by running the following
command on Node 1:

Bash

sudo pcs property set stonith-enabled=true

Check cluster status


You can check the status of your cluster resources using the following command:

Output

[<username>@VM1 ~]$ sudo pcs status

Cluster name: az-hacluster

Stack: corosync

Current DC: <VM3> (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with


quorum

Last updated: Sat Dec 7 00:18:38 2019

Last change: Sat Dec 7 00:18:02 2019 by root via cibadmin on VM1

3 nodes configured

5 resources configured

Online: [ <VM1> <VM2> <VM3> ]

Full list of resources:

Master/Slave Set: ag_cluster-master [ag_cluster]

Masters: [ <VM2> ]

Slaves: [ <VM1> <VM3> ]

virtualip (ocf::heartbeat:IPaddr2): Started <VM2>

rsc_st_azure (stonith:fence_azure_arm): Started <VM1>

Daemon Status:

corosync: active/enabled

pacemaker: active/enabled

pcsd: active/enabled

Test failover
To ensure that the configuration has succeeded so far, we will test a failover. For more
information, see Always On availability group failover on Linux.

1. Run the following command to manually fail over the primary replica to <VM2> .
Replace <VM2> with the value of your server name.

RHEL 7

Bash

sudo pcs resource move ag_cluster-master <VM2> --master

RHEL 8

Bash
sudo pcs resource move ag_cluster-clone <VM2> --master

You can also specify an additional option so that the temporary constraint that's
created to move the resource to a desired node is disabled automatically, and you
do not have to perform steps 2 and 3 below.

RHEL 7

Bash

sudo pcs resource move ag_cluster-master <VM2> --master lifetime=30S

RHEL 8

Bash

sudo pcs resource move ag_cluster-clone <VM2> --master lifetime=30S

Another alternative to automate steps 2 and 3 below which clear the temporary
constraint in the resource move command itself is by combining multiple
commands in a single line.

RHEL 7

Bash

sudo pcs resource move ag_cluster-master <VM2> --master && sleep 30 &&
pcs resource clear ag_cluster-master

RHEL 8

Bash

sudo pcs resource move ag_cluster-clone <VM2> --master && sleep 30 &&
pcs resource clear ag_cluster-clone

2. If you check your constraints again, you'll see that another constraint was added
because of the manual failover:

RHEL 7

Output
[<username>@VM1 ~]$ sudo pcs constraint list --full

Location Constraints:

Resource: ag_cluster-master
Enabled on: VM2 (score:INFINITY) (role: Master) (id:cli-prefer-
ag_cluster-master)

Ordering Constraints:

promote ag_cluster-master then start virtualip (kind:Mandatory)


(id:order-ag_cluster-master-virtualip-mandatory)

Colocation Constraints:

virtualip with ag_cluster-master (score:INFINITY) (with-rsc-


role:Master) (id:colocation-virtualip-ag_cluster-master-INFINITY)

Ticket Constraints:

RHEL 8

Output

[<username>@VM1 ~]$ sudo pcs constraint list --full

Location Constraints:

Resource: ag_cluster-master
Enabled on: VM2 (score:INFINITY) (role: Master) (id:cli-prefer-
ag_cluster-clone)

Ordering Constraints:

promote ag_cluster-clone then start virtualip (kind:Mandatory)


(id:order-ag_cluster-clone-virtualip-mandatory)

Colocation Constraints:

virtualip with ag_cluster-clone (score:INFINITY) (with-rsc-


role:Master) (id:colocation-virtualip-ag_cluster-clone-INFINITY)

Ticket Constraints:

3. Remove the constraint with ID cli-prefer-ag_cluster-master using the following


command:

RHEL 7

Bash

sudo pcs constraint remove cli-prefer-ag_cluster-master

RHEL 8

Bash

sudo pcs constraint remove cli-prefer-ag_cluster-clone

4. Check your cluster resources using the command sudo pcs resource , and you
should see that the primary instance is now <VM2> .
Output

[<username>@<VM1> ~]$ sudo pcs resource

Master/Slave Set: ag_cluster-master [ag_cluster]

ag_cluster (ocf::mssql:ag): FAILED <VM1> (Monitoring)

Masters: [ <VM2> ]

Slaves: [ <VM3> ]

virtualip (ocf::heartbeat:IPaddr2): Started <VM2>

[<username>@<VM1> ~]$ sudo pcs resource

Master/Slave Set: ag_cluster-master [ag_cluster]

Masters: [ <VM2> ]

Slaves: [ <VM1> <VM3> ]

virtualip (ocf::heartbeat:IPaddr2): Started <VM2>

Test fencing
You can test STONITH by running the following command. Try running the below
command from <VM1> for <VM3> .

Bash

sudo pcs stonith fence <VM3> --debug

7 Note

By default, the fence action brings the node off and then on. If you only want to
bring the node offline, use the option --off in the command.

You should get the following output:

Output

[<username>@<VM1> ~]$ sudo pcs stonith fence <VM3> --debug

Running: stonith_admin -B <VM3>

Return Value: 0

--Debug Output Start--

--Debug Output End--

Node: <VM3> fenced

For more information on testing a fence device, see the following Red Hat article.

Next steps
In order to utilize an availability group listener for your SQL Server instances, you will
need to create and configure a load balancer.

Tutorial: Configure an availability group listener for SQL Server on RHEL virtual
machines in Azure
Tutorial: Configure availability groups
for SQL Server on SLES virtual machines
in Azure
Article • 03/10/2023

Applies to:
SQL Server on Azure VM

7 Note

We use SQL Server 2022 (16.x) with SUSE Linux Enterprise Server (SLES) v15 in this
tutorial, but it is possible to use SQL Server 2019 (15.x) with SLES v12 or SLES v15,
to configure high availability.

In this tutorial, you learn how to:

" Create a new resource group, availability set, and Linux virtual machines (VMs)
" Enable high availability (HA)
" Create a Pacemaker cluster
" Configure a fencing agent by creating a STONITH device
" Install SQL Server and mssql-tools on SLES
" Configure SQL Server Always On availability group
" Configure availability group (AG) resources in the Pacemaker cluster
" Test a failover and the fencing agent

This tutorial uses the Azure CLI to deploy resources in Azure.

If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see
Quickstart for Bash in Azure Cloud Shell.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you're
running on Windows or macOS, consider running Azure CLI in a Docker container.
For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login
command. To finish the authentication process, follow the steps displayed in
your terminal. For other sign-in options, see Sign in with the Azure CLI.

When you're prompted, install the Azure CLI extension on first use. For more
information about extensions, see Use extensions with the Azure CLI.

Run az version to find the version and dependent libraries that are installed. To
upgrade to the latest version, run az upgrade.

This article requires version 2.0.30 or later of the Azure CLI. If using Azure Cloud
Shell, the latest version is already installed.

Create a resource group


If you've more than one subscription, set the subscription that you want deploy these
resources to.

Use the following command to create a resource group <resourceGroupName> in a


region. Replace <resourceGroupName> with a name of your choosing. This tutorial uses
East US 2 . For more information, see the following Quickstart.

Azure CLI

az group create --name <resourceGroupName> --location eastus2

Create an availability set


The next step is to create an availability set. Run the following command in Azure Cloud
Shell, and replace <resourceGroupName> with your resource group name. Choose a name
for <availabilitySetName> .

Azure CLI

az vm availability-set create \

--resource-group <resourceGroupName> \

--name <availabilitySetName> \

--platform-fault-domain-count 2 \

--platform-update-domain-count 2

You should get the following results once the command completes:

Output
{

"id":
"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provider
s/Microsoft.Compute/availabilitySets/<availabilitySetName>",

"location": "eastus2",

"name": "<availabilitySetName>",

"platformFaultDomainCount": 2,

"platformUpdateDomainCount": 2,
"proximityPlacementGroup": null,

"resourceGroup": "<resourceGroupName>",

"sku": {

"capacity": null,

"name": "Aligned",

"tier": null

},

"statuses": null,

"tags": {},

"type": "Microsoft.Compute/availabilitySets",

"virtualMachines": []

Create a virtual network and subnet


1. Create a named subnet with a pre-assigned IP address range. Replace these values
in the following command:

<resourceGroupName>

<vNetName>
<subnetName>

Azure CLI

az network vnet create \

--resource-group <resourceGroupName> \

--name <vNetName> \
--address-prefix 10.1.0.0/16 \

--subnet-name <subnetName> \

--subnet-prefix 10.1.1.0/24

The previous command creates a VNet and a subnet containing a custom IP range.

Create SLES VMs inside the availability set


1. Get a list of virtual machine images that offer SLES v15 SP4 with BYOS (bring your
own subscription). You can also use the SUSE Enterprise Linux 15 SP4 + Patching
VM ( sles-15-sp4-basic ).

Azure CLI

az vm image list --all --offer "sles-15-sp3-byos"

# if you want to search the basic offers you could search using the
command below

az vm image list --all --offer "sles-15-sp3-basic"

You should see the following results when you search for the BYOS images:

Output

"offer": "sles-15-sp3-byos",

"publisher": "SUSE",

"sku": "gen1",

"urn": "SUSE:sles-15-sp3-byos:gen1:2022.05.05",

"version": "2022.05.05"

},

"offer": "sles-15-sp3-byos",

"publisher": "SUSE",

"sku": "gen1",

"urn": "SUSE:sles-15-sp3-byos:gen1:2022.07.19",

"version": "2022.07.19"

},

"offer": "sles-15-sp3-byos",

"publisher": "SUSE",

"sku": "gen1",

"urn": "SUSE:sles-15-sp3-byos:gen1:2022.11.10",

"version": "2022.11.10"

},

"offer": "sles-15-sp3-byos",

"publisher": "SUSE",

"sku": "gen2",

"urn": "SUSE:sles-15-sp3-byos:gen2:2022.05.05",

"version": "2022.05.05"

},

"offer": "sles-15-sp3-byos",

"publisher": "SUSE",

"sku": "gen2",

"urn": "SUSE:sles-15-sp3-byos:gen2:2022.07.19",

"version": "2022.07.19"

},

"offer": "sles-15-sp3-byos",

"publisher": "SUSE",

"sku": "gen2",

"urn": "SUSE:sles-15-sp3-byos:gen2:2022.11.10",

"version": "2022.11.10"

This tutorial uses SUSE:sles-15-sp3-byos:gen1:2022.11.10 .

) Important

Machine names must be less than 15 characters in length to set up an


availability group. Usernames cannot contain upper case characters, and
passwords must have between 12 and 72 characters.

2. Create three VMs in the availability set. Replace these values in the following
command:

<resourceGroupName>

<VM-basename>
<availabilitySetName>

<VM-Size> - An example would be "Standard_D16s_v3"


<username>

<adminPassword>

<vNetName>
<subnetName>

Azure CLI

for i in `seq 1 3`; do

az vm create \

--resource-group <resourceGroupName> \

--name <VM-basename>$i \

--availability-set <availabilitySetName> \

--size "<VM-Size>" \

--os-disk-size-gb 128 \

--image "SUSE:sles-15-sp3-byos:gen1:2022.11.10" \

--admin-username "<username>" \

--admin-password "<adminPassword>" \

--authentication-type all \

--generate-ssh-keys \

--vnet-name "<vNetName>" \

--subnet "<subnetName>" \

--public-ip-sku Standard \

--public-ip-address ""

done

The previous command creates the VMs using the previously defined VNet. For more
information on the different configurations, see the az vm create article.

The command also includes the --os-disk-size-gb parameter to create a custom OS


drive size of 128 GB. If you increase this size later, expand appropriate folder volumes to
accommodate your installation, configure the Logical Volume Manager (LVM).

You should get results similar to the following once the command completes for each
VM:

Output

"fqdns": "",

"id":
"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provider
s/Microsoft.Compute/virtualMachines/sles1",

"location": "westus",

"macAddress": "<Some MAC address>",

"powerState": "VM running",

"privateIpAddress": "<IP1>",

"resourceGroup": "<resourceGroupName>",

"zones": ""

Test connection to the created VMs


Connect to each of the VMs using the following command in Azure Cloud Shell. If you're
unable to find your VM IPs, follow this Quickstart on Azure Cloud Shell.

Azure CLI

ssh <username>@<publicIPAddress>

If the connection is successful, you should see the following output representing the
Linux terminal:

Output

[<username>@sles1 ~]$

Type exit to leave the SSH session.


Register with SUSEConnect and install high
availability packages
In order to complete this tutorial, your VMs must be registered with SUSEConnect to
receive updates and support. You can then install the High Availability Extension
module, or pattern, which is a set of packages that enables HA.

It is easier to open an SSH session on each of the VMs (nodes) simultaneously, as the
same commands must be run on each VM throughout the article.

If you're copying and pasting multiple sudo commands and are prompted for a
password, the additional commands won't run. Run each command separately.

Connect to each VM node to run the following steps.

Register the VM with SUSEConnect


To register your VM node with SUSEConnect, replace these values in the following
command, on all the nodes:

<subscriptionEmailAddress>
<registrationCode>

Bash

sudo SUSEConnect

--url=https://scc.suse.com

-e <subscriptionEmailAddress> \

-r <registrationCode>

Install High Availability Extension


To install the High Availability Extension, run the following command on all the nodes:

Bash

sudo SUSEConnect -p sle-ha/15.3/x86_64 -r <registration code for Partner


Subscription for High Availability Extension>

Configure passwordless SSH access between


nodes
Passwordless SSH access allows your VMs to communicate with each other using SSH
public keys. You must configure SSH keys on each node, and copy those keys to each
node.

Generate new SSH keys


The required SSH key size is 4,096 bits. On each VM, change to the /root/.ssh folder,
and run the following command:

Bash

ssh-keygen -t rsa -b 4096

During this step, you may be prompted to overwrite an existing SSH file. You must agree
to this prompt. You don't need to enter a passphrase.

Copy the public SSH keys


On each VM, you must copy the public key from the node you just created, using the
ssh-copy-id command. If you want to specify the target directory on the target VM, you

can use the -i parameter.

In the following command, the <username> account can be the same account you
configured for each node when creating the VM. You can also use the root account, but
this isn't recommended in a production environment.

Bash

sudo ssh-copy-id <username>@sles1


sudo ssh-copy-id <username>@sles2
sudo ssh-copy-id <username>@sles3

Verify passwordless access from each node


To confirm that the SSH public key was copied to each node, use the ssh command
from each node. If you copied the keys correctly, you won't be prompted for a
password, and the connection will be successful.

In this example, we are connecting to the second and third nodes from the first VM
( sles1 ). Once again the <username> account can be the same account you configured
for each node when creating the VM
Bash

ssh <username>@sles2

ssh <username>@sles3

Repeat this process from all three nodes, so that each node can communicate with the
others without requiring passwords.

Configure name resolution


You can configure name resolution using either DNS, or by manually editing the
etc/hosts file on each node.

For more information about DNS and Active Directory, see Join SQL Server on a Linux
host to an Active Directory domain.

) Important

We recommend that you use your private IP address in the previous example.
Using the public IP address in this configuration will cause the setup to fail, and
would expose your VM to external networks.

The VMs and their IP address used in this example are listed as follows:

sles1 : 10.0.0.85

sles2 : 10.0.0.86

sles3 : 10.0.0.87

Configure the cluster


For this tutorial, your first VM ( sles1 ) is node 1, your second VM ( sles2 ) is node 2, and
your third VM ( sles3 ) is node 3. For more information on cluster installation, see Set up
Pacemaker on SUSE Linux Enterprise Server in Azure.

Cluster installation
1. Run the following command to install the ha-cluster-bootstrap package on node
1, and then restart the node. In this example, it is the sles1 VM.

Bash
sudo zypper install ha-cluster-bootstrap

After the node is restarted, run the following command to deploy the cluster:

Bash

sudo crm cluster init --name sqlcluster

You'll see a similar output to the following example:

Output

Do you want to continue anyway (y/n)? y

Generating SSH key for root

The user 'hacluster' will have the login shell configuration changed
to /bin/bash

Continue (y/n)? y

Generating SSH key for hacluster

Configuring csync2

Generating csync2 shared key (this may take a while)...done

csync2 checking files...done

Detected cloud platform: microsoft-azure

Configure Corosync (unicast):

This will configure the cluster messaging layer. You will need

to specify a network address over which to communicate (default

is eth0's network, but you can use the network address of any

active interface).

Address for ring0 [10.0.0.85]

Port for ring0 [5405]

Configure SBD:

If you have shared storage, for example a SAN or iSCSI target,

you can use it avoid split-brain scenarios by configuring SBD.

This requires a 1 MB partition, accessible to all nodes in the

cluster. The device path must be persistent and consistent

across all nodes in the cluster, so /dev/disk/by-id/* devices

are a good choice. Note that all data on the partition you

specify here will be destroyed.

Do you wish to use SBD (y/n)? n

WARNING: Not configuring SBD - STONITH will be disabled.

Hawk cluster interface is now running. To see cluster status, open:

https://10.0.0.85:7630/

Log in with username 'hacluster', password 'linux'

WARNING: You should change the hacluster password to something more


secure!

Waiting for cluster..............done

Loading initial cluster configuration

Configure Administration IP Address:

Optionally configure an administration virtual IP

address. The purpose of this IP address is to

provide a single IP that can be used to interact

with the cluster, rather than using the IP address

of any specific cluster node.

Do you wish to configure a virtual IP address (y/n)? y

Virtual IP []10.0.0.89

Configuring virtual IP (10.0.0.89)....done

Configure Qdevice/Qnetd:

QDevice participates in quorum decisions. With the assistance of

a third-party arbitrator Qnetd, it provides votes so that a cluster

is able to sustain more node failures than standard quorum rules

allow. It is recommended for clusters with an even number of nodes

and highly recommended for 2 node clusters.

Do you want to configure QDevice (y/n)? n

Done (log saved to /var/log/crmsh/ha-cluster-bootstrap.log)

2. Check the status of the cluster on node 1 using the following command:

Bash

sudo crm status

Your output should include the following text if it was successful:

Output

1 node configured

1 resource instance configured

3. On all nodes, change the password for hacluster to something more secure using
the following command. You must also change your root user password:

Bash

sudo passwd hacluster

Bash

sudo passwd root

4. Run the following command on node 2 and node 3 to first install the crmsh
package:
Bash

sudo zypper install crmsh

Now, run the command to join the cluster:

Bash

sudo crm cluster join

Here are some of the interactions to expect:

Output

Join This Node to Cluster:

You will be asked for the IP address of an existing node, from which

configuration will be copied. If you have not already configured

passwordless ssh between nodes, you will be prompted for the root

password of the existing node.

IP address or hostname of existing node (e.g.: 192.168.1.1)


[]10.0.0.85

Configuring SSH passwordless with root@10.0.0.85

root@10.0.0.85's password:

Configuring SSH passwordless with hacluster@10.0.0.85

Configuring csync2...done

Merging known_hosts

WARNING: scp to sles2 failed (Exited with error code 1, Error output:
The authenticity of host 'sles2 (10.1.1.5)' can't be established.

ECDSA key fingerprint is


SHA256:UI0iyfL5N6X1ZahxntrScxyiamtzsDZ9Ftmeg8rSBFI.

Are you sure you want to continue connecting (yes/no/[fingerprint])?

lost connection

), known_hosts update may be incomplete

Probing for new partitions...done


Address for ring0 [10.0.0.86]

Hawk cluster interface is now running. To see cluster status, open:

https://10.0.0.86:7630/

Log in with username 'hacluster', password 'linux'

WARNING: You should change the hacluster password to something more


secure!

Waiting for cluster.....done

Reloading cluster configuration...done

Done (log saved to /var/log/crmsh/ha-cluster-bootstrap.log)

5. Once you've joined all machines to the cluster, check your resource to see if all
VMs are online:
Bash

sudo crm status

You should see the following output:

Output

Stack: corosync

Current DC: sles1 (version 2.0.5+20201202.ba59be712-150300.4.30.3-


2.0.5+20201202.ba59be712) - partition with quorum

Last updated: Mon Mar 6 18:01:17 2023

Last change: Mon Mar 6 17:10:09 2023 by root via cibadmin on sles1

3 nodes configured

1 resource instance configured

Online: [ sles1 sles2 sles3 ]

Full list of resources:

admin-ip (ocf::heartbeat:IPaddr2): Started sles1

6. Install the cluster resource component. Run the following command on all nodes.

Bash

sudo zypper in socat

7. Install the azure-lb component. Run the following command on all nodes.

Bash

sudo zypper in resource-agents

8. Configure the operating system. Go through the following steps on all nodes.

a. Edit the configuration file:

Bash

sudo vi /etc/systemd/system.conf

b. Change the DefaultTasksMax value to 4096 :

ini
#DefaultTasksMax=512

DefaultTasksMax=4096

c. Save and exit the vi editor.

d. To activate this setting, run the following command:

Bash

sudo systemctl daemon-reload

e. Test if the change was successful:

Bash

sudo systemctl --no-pager show | grep DefaultTasksMax

9. Reduce the size of the dirty cache. Go through the following steps on all nodes.

a. Edit the system control configuration file:

Bash

sudo vi /etc/sysctl.conf

b. Add the following two lines to the file:

ini

vm.dirty_bytes = 629145600

vm.dirty_background_bytes = 314572800

c. Save and exit the vi editor.

10. Install the Azure Python SDK on all nodes with the following commands:

Bash

sudo zypper install fence-agents

# Install the Azure Python SDK on SLES 15 or later:

# You might need to activate the public cloud extension first. In this
example, the SUSEConnect command is for SLES 15 SP1

SUSEConnect -p sle-module-public-cloud/15.1/x86_64

sudo zypper install python3-azure-mgmt-compute

sudo zypper install python3-azure-identity

Configure fencing agent


A STONITH device provides a fencing agent. The below instructions are modified for this
tutorial. For more information, see Create an Azure fence agent STONITH device.

Check the version of the Azure fence agent to ensure that it's updated. Use the
following command:

Bash

sudo zypper info resource-agents

You should see a similar output to the below example.

Output

Information for package resource-agents:

----------------------------------------

Repository : SLE-Product-HA15-SP3-Updates

Name : resource-agents

Version : 4.8.0+git30.d0077df0-150300.8.37.1

Arch : x86_64

Vendor : SUSE LLC <https://www.suse.com/>

Support Level : Level 3

Installed Size : 2.5 MiB

Installed : Yes (automatically)

Status : up-to-date

Source package : resource-agents-4.8.0+git30.d0077df0-150300.8.37.1.src

Upstream URL : http://linux-ha.org/

Summary : HA Reusable Cluster Resource Scripts

Description : A set of scripts to interface with several services

to operate in a High Availability environment for both

Pacemaker and rgmanager service managers.

Register new application in Azure Active Directory


1. Go to https://portal.azure.com
2. Open the Azure Active Directory pane . Go to Properties and write down the
Directory ID. This is your tenant ID .
3. Navigate to App registrations > New registration.
4. Enter a Name such as <resourceGroupName>-app , and select Accounts in this
organization directory only.
5. Select Application Type Web, enter a sign-on URL (for example http://localhost )
and select Add. The sign-on URL isn't used and can be any valid URL. Once done,
select Register.
6. Select Certificates and secrets for your new App registration, then select New
client secret.
7. Enter a description for a new key (client secret), select Never expires and select
Add.
8. Write down the value of the secret. It is used as the password for the service
principal.
9. Select Overview. Write down the Application ID. It is used as the username (login
ID in the steps later in this section) of the service principal.

Create custom role for the fence agent


Follow the tutorial to Create an Azure custom role using Azure CLI.

Your JSON file should look similar to the following example.

Replace <username> with a name of your choice. This is to avoid any duplication
when creating this role definition.
Replace <subscriptionId> with your Azure Subscription ID.

JSON

"Name": "Linux Fence Agent Role-<username>",

"Id": null,

"IsCustom": true,

"Description": "Allows to power-off and start virtual machines",


"Actions": [

"Microsoft.Compute/*/read",

"Microsoft.Compute/virtualMachines/powerOff/action",

"Microsoft.Compute/virtualMachines/start/action"

],

"NotActions": [

],

"AssignableScopes": [

"/subscriptions/<subscriptionId>"

To add the role, run the following command:

Replace <filename> with the name of the file.


If you're executing the command from a path other than the folder that the file is
saved to, include the folder path of the file in the command.

Bash
az role definition create --role-definition "<filename>.json"

You should see the following output:

Output

"assignableScopes": [

"/subscriptions/<subscriptionId>"

],

"description": "Allows to power-off and start virtual machines",

"id":
"/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/roleDefin
itions/<roleNameId>",

"name": "<roleNameId>",

"permissions": [

"actions": [

"Microsoft.Compute/*/read",

"Microsoft.Compute/virtualMachines/powerOff/action",

"Microsoft.Compute/virtualMachines/start/action"

],

"dataActions": [],

"notActions": [],

"notDataActions": []

],

"roleName": "Linux Fence Agent Role-<username>",

"roleType": "CustomRole",

"type": "Microsoft.Authorization/roleDefinitions"

Assign the custom role to the service principal


Assign the custom role Linux Fence Agent Role-<username> that was created in the last
step, to the service principal. Repeat these steps for all nodes.

2 Warning

Don't use the Owner role from here on.

1. Go to https://portal.azure.com
2. Open the All resources pane
3. Select the virtual machine of the first cluster node
4. Select Access control (IAM)
5. Select Add role assignments
6. Select the role Linux Fence Agent Role-<username> from the Role list
7. Leave Assign access to as the default Users, group, or service principal .
8. In the Select list, enter the name of the application you created previously, for
example <resourceGroupName>-app .
9. Select Save.

Create the STONITH devices


1. Run the following commands on node 1:

Replace the <ApplicationID> with the ID value from your application


registration.
Replace the <servicePrincipalPassword> with the value from the client secret.
Replace the <resourceGroupName> with the resource group from your
subscription used for this tutorial.
Replace the <tenantID> and the <subscriptionId> from your Azure
Subscription.

2. Run crm configure to open the crm prompt:

Bash

sudo crm configure

3. In the crm prompt, run the following command to configure the resource
properties, which creates the resource called rsc_st_azure as shown in the
following example:

Bash

primitive rsc_st_azure stonith:fence_azure_arm params


subscriptionId="subscriptionID" resourceGroup="ResourceGroup_Name"
tenantId="TenantID" login="ApplicationID"
passwd="servicePrincipalPassword" pcmk_monitor_retries=4
pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900
pcmk_host_map="sles1:sles1;sles2:sles2;sles3:sles3" op monitor
interval=3600 timeout=120

commit

quit

4. Run the following commands to configure the fencing agent:

Bash
sudo crm configure property stonith-timeout=900

sudo crm configure property stonith-enabled=true

sudo crm configure property concurrent-fencing=true

5. Check the status of your cluster to see that STONITH has been enabled:

Bash

sudo crm status

You should see output similar to the following text:

Output

Stack: corosync

Current DC: sles1 (version 2.0.5+20201202.ba59be712-150300.4.30.3-


2.0.5+20201202.ba59be712) - partition with quorum

Last updated: Mon Mar 6 18:20:17 2023

Last change: Mon Mar 6 18:10:09 2023 by root via cibadmin on sles1

3 nodes configured

2 resource instances configured

Online: [ sles1 sles2 sles3 ]

Full list of resources:

admin-ip (ocf::heartbeat:IPaddr2): Started sles1

rsc_st_azure (stonith:fence_azure_arm): Started sles2

Install SQL Server and mssql-tools


Use the below section to install SQL Server and mssql-tools. For more information, see
Install SQL Server on SUSE Linux Enterprise Server.

Perform these steps on all nodes in this section.

Install SQL Server on the VMs


The following commands are used to install SQL Server:

1. Download the Microsoft SQL Server 2019 SLES repository configuration file:

Bash
sudo zypper addrepo -fc
https://packages.microsoft.com/config/sles/15/mssql-server-2022.repo

2. Refresh your repositories.

Bash

sudo zypper --gpg-auto-import-keys refresh

To ensure that the Microsoft package signing key is installed on your system, use
the following command to import the key:

Bash

sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc

3. Run the following commands to install SQL Server:

Bash

sudo zypper install -y mssql-server

4. After the package installation finishes, run mssql-conf setup and follow the
prompts to set the SA password and choose your edition.

Bash

sudo /opt/mssql/bin/mssql-conf setup

7 Note

Make sure to specify a strong password for the SA account (Minimum length
8 characters, including uppercase and lowercase letters, base 10 digits and/or
non-alphanumeric symbols).

5. Once the configuration is done, verify that the service is running:

Bash

systemctl status mssql-server

Install SQL Server command-line tools


The following steps install the SQL Server command-line tools, namely sqlcmd and bcp.

1. Add the Microsoft SQL Server repository to Zypper.

Bash

sudo zypper addrepo -fc


https://packages.microsoft.com/config/sles/15/prod.repo

2. Refresh your repositories.

Bash

sudo zypper --gpg-auto-import-keys refresh

3. Install mssql-tools with the unixODBC developer package. For more information,
see Install the Microsoft ODBC driver for SQL Server (Linux).

Bash

sudo zypper install -y mssql-tools unixODBC-devel

For convenience, you can add /opt/mssql-tools/bin/ to your PATH environment


variable. This enables you to run the tools without specifying the full path. Run the
following commands to modify the PATH for both login sessions and interactive/non-
login sessions:

Bash

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

source ~/.bashrc

Install SQL Server high availability agent


Run the following command on all nodes to install the high availability agent package
for SQL Server:

Bash

sudo zypper install mssql-server-ha

Open ports for high availability services


1. You can open the following firewall ports on all nodes for SQL Server and HA
services: 1433, 2224, 3121, 5022, 5405, 21064.

Bash

sudo firewall-cmd --zone=public --add-port=1433/tcp --add-port=2224/tcp


--add-port=3121/tcp --add-port=5022/tcp --add-port=5405/tcp --add-
port=21064 --permanent

sudo firewall-cmd --reload

Configure an availability group


Use the following steps to configure a SQL Server Always On availability group for your
VMs. For more information, see Configure SQL Server Always On availability groups for
high availability on Linux

Enable availability groups and restart SQL Server


Enable availability groups on each node that hosts a SQL Server instance. Then restart
the mssql-server service. Run the following commands on each node:

Bash

sudo /opt/mssql/bin/mssql-conf set hadr.hadrenabled 1

Bash

sudo systemctl restart mssql-server

Create a certificate
Microsoft doesn't support Active Directory authentication to the AG endpoint.
Therefore, you must use a certificate for AG endpoint encryption.

1. Connect to all nodes using SQL Server Management Studio (SSMS) or sqlcmd. Run
the following commands to enable an AlwaysOn_health session and create a
master key:

) Important
If you are connecting remotely to your SQL Server instance, you will need to
have port 1433 open on your firewall. You'll also need to allow inbound
connections to port 1433 in your NSG for each VM. For more information, see
Create a security rule for creating an inbound security rule.

Replace the <MasterKeyPassword> with your own password.

SQL

ALTER EVENT SESSION AlwaysOn_health ON SERVER

WITH (STARTUP_STATE = ON);

GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<MasterKeyPassword>';

GO

2. Connect to the primary replica using SSMS or sqlcmd. The below commands
create a certificate at /var/opt/mssql/data/dbm_certificate.cer and a private key
at var/opt/mssql/data/dbm_certificate.pvk on your primary SQL Server replica:

Replace the <PrivateKeyPassword> with your own password.

SQL

CREATE CERTIFICATE dbm_certificate

WITH SUBJECT = 'dbm';

GO

BACKUP CERTIFICATE dbm_certificate TO FILE =


'/var/opt/mssql/data/dbm_certificate.cer'

WITH PRIVATE KEY (

FILE = '/var/opt/mssql/data/dbm_certificate.pvk',

ENCRYPTION BY PASSWORD = '<PrivateKeyPassword>'

);

GO

Exit the sqlcmd session by running the exit command, and return back to your SSH
session.

Copy the certificate to the secondary replicas and create


the certificates on the server
1. Copy the two files that were created to the same location on all servers that will
host availability replicas.
On the primary server, run the following scp command to copy the certificate to
the target servers:

Replace <username> and sles2 with the user name and target VM name that
you're using.
Run this command for all secondary replicas.

7 Note

You don't have to run sudo -i , which gives you the root environment. You
can run the sudo command in front of each command instead.

Bash

# The below command allows you to run commands in the root environment

sudo -i

Bash

scp /var/opt/mssql/data/dbm_certificate.*
<username>@sles2:/home/<username>

2. On the target server, run the following command:

Replace <username> with your user name.


The mv command moves the files or directory from one place to another.
The chown command is used to change the owner and group of files,
directories, or links.
Run these commands for all secondary replicas.

Bash

sudo -i

mv /home/<username>/dbm_certificate.* /var/opt/mssql/data/

cd /var/opt/mssql/data

chown mssql:mssql dbm_certificate.*

3. The following Transact-SQL script creates a certificate from the backup that you
created on the primary SQL Server replica. Update the script with strong
passwords. The decryption password is the same password that you used to create
the .pvk file in the previous step. To create the certificate, run the following script
using sqlcmd or SSMS on all secondary servers:
SQL

CREATE CERTIFICATE dbm_certificate

FROM FILE = '/var/opt/mssql/data/dbm_certificate.cer'

WITH PRIVATE KEY (

FILE = '/var/opt/mssql/data/dbm_certificate.pvk',

DECRYPTION BY PASSWORD = '<PrivateKeyPassword>'

);

GO

Create the database mirroring endpoints on all replicas


Run the following script on all SQL Server instances using sqlcmd or SSMS:

SQL

CREATE ENDPOINT [Hadr_endpoint]

AS TCP (LISTENER_PORT = 5022)

FOR DATABASE_MIRRORING (

ROLE = ALL,

AUTHENTICATION = CERTIFICATE dbm_certificate,

ENCRYPTION = REQUIRED ALGORITHM AES

);

GO

ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;

GO

Create the availability group


Connect to the SQL Server instance that hosts the primary replica using sqlcmd or
SSMS. Run the following command to create the availability group:

Replace ag1 with your desired AG name.


Replace the sles1 , sles2 , and sles3 values with the names of the SQL Server
instances that host the replicas.

SQL

CREATE AVAILABILITY

GROUP [ag1]

WITH (

DB_FAILOVER = ON,

CLUSTER_TYPE = EXTERNAL

FOR REPLICA

ON N'sles1'

WITH (

ENDPOINT_URL = N'tcp://sles1:5022',

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC

),

N'sles2'

WITH (

ENDPOINT_URL = N'tcp://sles2:5022',

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC

),

N'sles3'

WITH (

ENDPOINT_URL = N'tcp://sles3:5022',

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC

);

GO

ALTER AVAILABILITY GROUP [ag1]

GRANT CREATE ANY DATABASE;

GO

Create a SQL Server login for Pacemaker


On all SQL Server instances, create a SQL Server login for Pacemaker. The following
Transact-SQL creates a login.

Replace <password> with your own complex password.

SQL

USE [master]

GO

CREATE LOGIN [pacemakerLogin]

WITH PASSWORD = N'<password>';

GO

ALTER SERVER ROLE [sysadmin]

ADD MEMBER [pacemakerLogin];

GO

On all SQL Server instances, save the credentials used for the SQL Server login.

1. Create the file:


Bash

sudo vi /var/opt/mssql/secrets/passwd

2. Add the following two lines to the file:

Bash

pacemakerLogin

<password>

To exit the vi editor, first hit the Esc key, and then enter the command :wq to write
the file and quit.

3. Make the file only readable by root:

Bash

sudo chown root:root /var/opt/mssql/secrets/passwd

sudo chmod 400 /var/opt/mssql/secrets/passwd

Join secondary replicas to the availability group


1. On your secondary replicas, run the following commands to join them to the AG:

SQL

ALTER AVAILABILITY GROUP [ag1] JOIN WITH (CLUSTER_TYPE = EXTERNAL);

GO

ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE;

GO

2. Run the following Transact-SQL script on the primary replica and each secondary
replica:

SQL

GRANT ALTER, CONTROL, VIEW DEFINITION

ON AVAILABILITY GROUP::ag1 TO pacemakerLogin;

GO

GRANT VIEW SERVER STATE TO pacemakerLogin;

GO

3. Once the secondary replicas are joined, you can see them in SSMS Object Explorer
by expanding the Always On High Availability node:

Add a database to the availability group


This section follows the article for adding a database to an availability group.

The following Transact-SQL commands are used in this step. Run these commands on
the primary replica:

SQL

CREATE DATABASE [db1]; -- creates a database named db1

GO

ALTER DATABASE [db1] SET RECOVERY FULL; -- set the database in full recovery
mode

GO

BACKUP DATABASE [db1] -- backs up the database to disk

TO DISK = N'/var/opt/mssql/data/db1.bak';

GO

ALTER AVAILABILITY GROUP [ag1] ADD DATABASE [db1]; -- adds the database db1
to the AG

GO

Verify that the database is created on the secondary


servers
On each secondary SQL Server replica, run the following query to see if the db1
database was created and is in a SYNCHRONIZED state:

SQL

SELECT * FROM sys.databases

WHERE name = 'db1';

GO

SELECT DB_NAME(database_id) AS 'database',

synchronization_state_desc

FROM sys.dm_hadr_database_replica_states;

GO

If the synchronization_state_desc lists SYNCHRONIZED for db1 , this means the replicas
are synchronized. The secondaries are showing db1 in the primary replica.

Create availability group resources in the


Pacemaker cluster

7 Note

Bias-free communication

This article contains references to the term slave, a term Microsoft considers
offensive when used in this context. The term appears in this article because it
currently appears in the software. When the term is removed from the software, we
will remove it from the article.

This article references the guide to create the availability group resources in a
Pacemaker cluster.

Enable Pacemaker
Enable Pacemaker so that it automatically starts.

Run the following command on all nodes in the cluster.

Bash
sudo systemctl enable pacemaker

Create the AG cluster resource


1. Run crm configure to open the crm prompt:

Bash

sudo crm configure

2. In the crm prompt, run the following command to configure the resource
properties. The following commands create the resource ag_cluster in the
availability group ag1 .

Bash

primitive ag_cluster ocf:mssql:ag params ag_name="ag1" meta failure-


timeout=60s op start timeout=60s op stop timeout=60s op promote
timeout=60s op demote timeout=10s op monitor timeout=60s interval=10s
op monitor timeout=60s interval=11s role="Master" op monitor
timeout=60s interval=12s role="Slave" op notify timeout=60s ms ms-
ag_cluster ag_cluster meta master-max="1" master-node-max="1" clone-
max="3" clone-node-max="1" notify="true"

commit

quit

 Tip

Type quit to exit from the crm prompt.

3. Set the co-location constraint for the virtual IP, to run on the same node as the
primary node:

Bash

sudo crm configure

colocation vip_on_master inf: admin-ip ms-ag_cluster: Master

commit

quit

4. Add the ordering constraint, to prevent the IP address from temporarily pointing
to the node with the pre-failover secondary. Run the following command to create
ordering constraint:
Bash

sudo crm configure

order ag_first inf: ms-ag_cluster:promote admin-ip:start

commit

quit

5. Check the status of the cluster using the command:

Bash

sudo crm status

The output should be similar to the following example:

Output

Cluster Summary:

Stack: corosync

Current DC: sles1 (version 2.0.5+20201202.ba59be712-150300.4.30.3-


2.0.5+20201202.ba59be712) - partition with quorum

Last updated: Mon Mar 6 18:38:17 2023

Last change: Mon Mar 6 18:38:09 2023 by root via cibadmin on sles1

3 nodes configured

5 resource instances configured

Node List:

Online: [ sles1 sles2 sles3 ]

Full List of Resources:

admin-ip (ocf::heartbeat:IPaddr2): Started sles1

rsc_st_azure (stonith:fence_azure_arm): Started sles2

Clone Set: ms-ag_cluster [ag_cluster] (promotable):

Masters: [ sles1 ]

Slaves: [ sles2 sles3 ]

6. Run the following command to review the constraints:

Bash

sudo crm configure show

The output should be similar to the following example:

Output
node 1: sles1

node 2: sles2

node 3: sles3

primitive admin-ip IPaddr2 \

params ip=10.0.0.93 \

op monitor interval=10 timeout=20

primitive ag_cluster ocf:mssql:ag \

params ag_name=ag1 \

meta failure-timeout=60s \

op start timeout=60s interval=0 \

op stop timeout=60s interval=0 \

op promote timeout=60s interval=0 \

op demote timeout=10s interval=0 \

op monitor timeout=60s interval=10s \

op monitor timeout=60s interval=11s role=Master \

op monitor timeout=60s interval=12s role=Slave \

op notify timeout=60s interval=0

primitive rsc_st_azure stonith:fence_azure_arm \

params subscriptionId=xxxxxxx resourceGroup=amvindomain


tenantId=xxxxxxx login=xxxxxxx passwd="******" cmk_monitor_retries=4
pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900
pcmk_host_map="sles1:sles1;les2:sles2;sles3:sles3" \

op monitor interval=3600 timeout=120

ms ms-ag_cluster ag_cluster \

meta master-max=1 master-node-max=1 clone-max=3 clone-node-


max=1 notify=true

order ag_first Mandatory: ms-ag_cluster:promote admin-ip:start

colocation vip_on_master inf: admin-ip ms-ag_cluster:Master

property cib-bootstrap-options: \
have-watchdog=false \

dc-version="2.0.5+20201202.ba59be712-150300.4.30.3-
2.0.5+20201202.ba59be712" \

cluster-infrastructure=corosync \

cluster-name=sqlcluster \
stonith-enabled=true \

concurrent-fencing=true \
stonith-timeout=900

rsc_defaults rsc-options: \

resource-stickiness=1 \

migration-threshold=3

op_defaults op-options: \

timeout=600 \

record-pending=true

Test failover
To ensure that the configuration has succeeded so far, test a failover. For more
information, see Always On availability group failover on Linux.
1. Run the following command to manually fail over the primary replica to sles2 .
Replace sles2 with the value of your server name.

Bash

sudo crm resource move ag_cluster sles2

The output should be similar to the following example:

Output

INFO: Move constraint created for ms-ag_cluster to sles2

INFO: Use `crm resource clear ms-ag_cluster` to remove this constraint

2. Check the status of the cluster:

Bash

sudo crm status

The output should be similar to the following example:

Output

Cluster Summary:

Stack: corosync

Current DC: sles1 (version 2.0.5+20201202.ba59be712-150300.4.30.3-


2.0.5+20201202.ba59be712) - partition with quorum

Last updated: Mon Mar 6 18:40:02 2023

Last change: Mon Mar 6 18:39:53 2023 by root via crm_resource on


sles1

3 nodes configured

5 resource instances configured

Node List:

Online: [ sles1 sles2 sles3 ]

Full List of Resources:

admin-ip (ocf::heartbeat:IPaddr2): Stopped

rsc_st_azure (stonith:fence_azure_arm): Started sles2

Clone Set: ms-ag_cluster [ag_cluster] (promotable):

Slaves: [ sles1 sles2 sles3 ]

3. After some time, the sles2 VM is now the primary, and the other two VMs are
secondaries. Run sudo crm status once again, and review the output, which is
similar to the following example:
Output

Cluster Summary:

Stack: corosync

Current DC: sles1 (version 2.0.5+20201202.ba59be712-150300.4.30.3-


2.0.5+20201202.ba59be712) - partition with quorum

Last updated: Tue Mar 6 22:00:44 2023

Last change: Mon Mar 6 18:42:59 2023 by root via cibadmin on sles1

3 nodes configured

5 resource instances configured

Node List:

Online: [ sles1 sles2 sles3 ]

Full List of Resources:

admin-ip (ocf::heartbeat:IPaddr2): Started sles2

rsc_st_azure (stonith:fence_azure_arm): Started sles2

Clone Set: ms-ag_cluster [ag_cluster] (promotable):

Masters: [ sles2 ]

Slaves: [ sles1 sles3 ]

4. Check your constraints again, using crm config show . Observe that another
constraint was added because of the manual failover.

5. Remove the constraint with ID cli-prefer-ag_cluster , using the following


command:

Bash

crm configure

delete cli-prefer-ms-ag_cluster

commit

Test fencing
You can test STONITH by running the following command. Try running the below
command from sles1 for sles3 .

Bash

sudo crm node fence sles3

See also
Tutorial: Configure an availability group listener for SQL Server on RHEL virtual
machines in Azure
Tutorial: Configure an availability group
listener for SQL Server on RHEL virtual
machines in Azure
Article • 11/04/2022

Applies to:
SQL Server on Azure VM

7 Note

The tutorial presented is in public preview.

We use SQL Server 2017 with RHEL 7.6 in this tutorial, but it is possible to use SQL
Server 2019 in RHEL 7 or RHEL 8 to configure high availability. The commands to
configure availability group resources has changed in RHEL 8, and you'll want to
look at the article Create availability group resource and RHEL 8 resources for
more information on the correct commands.

This tutorial will go over steps on how to create an availability group listener for your
SQL Servers on RHEL virtual machines (VMs) in Azure. You will learn how to:

" Create a load balancer in the Azure portal


" Configure the back-end pool for the load balancer
" Create a probe for the load balancer
" Set the load balancing rules
" Create the load balancer resource in the cluster
" Create the availability group listener
" Test connecting to the listener
" Testing a failover

Prerequisite
Completed Tutorial: Configure availability groups for SQL Server on RHEL virtual
machines in Azure

Create the load balancer in the Azure portal


The following instructions take you through steps 1 through 4 from the Create and
configure the load balancer in the Azure portal section of the Load balancer - Azure
portal article.

Create the load balancer


1. In the Azure portal, open the resource group that contains the SQL Server virtual
machines.

2. In the resource group, click Add.

3. Search for load balancer and then, in the search results, select Load Balancer,
which is published by Microsoft.

4. On the Load Balancer blade, click Create.

5. In the Create load balancer dialog box, configure the load balancer as follows:

Setting Value

Name A text name representing the load balancer. For example, sqlLB.

Type Internal

Virtual network The default virtual network that was created should be named
VM1VNET.

Subnet Select the subnet that the SQL Server instances are in. The default
should be VM1Subnet.

IP address Static
assignment

Private IP Use the virtualip IP address that was created in the cluster.
address

Subscription Use the subscription that was used for your resource group.

Resource group Select the resource group that the SQL Server instances are in.

Location Select the Azure location that the SQL Server instances are in.

Configure the back-end pool


Azure calls the back-end address pool backend pool. In this case, the back-end pool is
the addresses of the three SQL Server instances in your availability group.

1. In your resource group, click the load balancer that you created.

2. On Settings, click Backend pools.


3. On Backend pools, click Add to create a back-end address pool.

4. On Add backend pool, under Name, type a name for the back-end pool.

5. Under Associated to, select Virtual machine.

6. Select each virtual machine in the environment, and associate the appropriate IP
address to each selection.
7. Click Add.

Create a probe
The probe defines how Azure verifies which of the SQL Server instances currently owns
the availability group listener. Azure probes the service based on the IP address on a
port that you define when you create the probe.

1. On the load balancer Settings blade, click Health probes.

2. On the Health probes blade, click Add.

3. Configure the probe on the Add probe blade. Use the following values to
configure the probe:

Setting Value

Name A text name representing the probe. For example,


SQLAlwaysOnEndPointProbe.

Protocol TCP

Port You can use any available port. For example, 59999.

Interval 5

Unhealthy 2
threshold

4. Click OK.

5. Log in to all your virtual machines, and open the probe port using the following
commands:

Bash

sudo firewall-cmd --zone=public --add-port=59999/tcp --permanent

sudo firewall-cmd --reload

Azure creates the probe and then uses it to test which SQL Server instance has the
listener for the availability group.

Set the load-balancing rules


The load-balancing rules configure how the load balancer routes traffic to the SQL
Server instances. For this load balancer, you enable direct server return because only
one of the three SQL Server instances owns the availability group listener resource at a
time.

1. On the load balancer Settings blade, click Load balancing rules.


2. On the Load balancing rules blade, click Add.

3. On the Add load balancing rules blade, configure the load-balancing rule. Use the
following settings:

Setting Value

Name A text name representing the load-balancing rules. For example,


SQLAlwaysOnEndPointListener.

Protocol TCP

Port 1433

Backend port 1433. This value is ignored because this rule uses Floating IP
(direct server return).

Probe Use the name of the probe that you created for this load balancer.

Session persistence None

Idle timeout 4
(minutes)

Floating IP (direct Enabled


server return)
4. Click OK.

5. Azure configures the load-balancing rule. Now the load balancer is configured to
route traffic to the SQL Server instance that hosts the listener for the availability
group.

At this point, the resource group has a load balancer that connects to all SQL Server
machines. The load balancer also contains an IP address for the SQL Server Always On
availability group listener, so that any machine can respond to requests for the
availability groups.
Create the load balancer resource in the cluster
1. Log in to the primary virtual machine. We need to create the resource to enable
the Azure load balancer probe port (59999 is used in our example). Run the
following command:

Bash

sudo pcs resource create azure_load_balancer azure-lb port=59999

2. Create a group that contains the virtualip and azure_load_balancer resource:

Bash

sudo pcs resource group add virtualip_group azure_load_balancer


virtualip

Add constraints
1. A colocation constraint must be configured to ensure the Azure load balancer IP
address and the AG resource are running on the same node. Run the following
command:

Bash

sudo pcs constraint colocation add azure_load_balancer ag_cluster-


master INFINITY with-rsc-role=Master

2. Create an ordering constraint to ensure that the AG resource is up and running


before the Azure load balancer IP address. While the colocation constraint implies
an ordering constraint, this enforces it.

Bash

sudo pcs constraint order promote ag_cluster-master then start


azure_load_balancer

3. To verify the constraints, run the following command:

Bash

sudo pcs constraint list --full

You should see the following output:

Output

Location Constraints:

Ordering Constraints:

promote ag_cluster-master then start virtualip (kind:Mandatory)


(id:order-ag_cluster-master-virtualip-mandatory)

promote ag_cluster-master then start azure_load_balancer


(kind:Mandatory) (id:order-ag_cluster-master-azure_load_balancer-
mandatory)

Colocation Constraints:

virtualip with ag_cluster-master (score:INFINITY) (with-rsc-


role:Master) (id:colocation-virtualip-ag_cluster-master-INFINITY)

azure_load_balancer with ag_cluster-master (score:INFINITY) (with-


rsc-role:Master) (id:colocation-azure_load_balancer-ag_cluster-master-
INFINITY)

Ticket Constraints:

Create the availability group listener


1. On the primary node, run the following command in SQLCMD or SSMS:

Replace the IP address used below with the virtualip IP address.

SQL

ALTER AVAILABILITY

GROUP [ag1] ADD LISTENER 'ag1-listener' (

WITH IP(('10.0.0.7' ,'255.255.255.0'))

,PORT = 1433

);

GO

2. Log in to each VM node. Use the following command to open the hosts file and
set up host name resolution for the ag1-listener on each machine.

sudo vi /etc/hosts

In the vi editor, enter i to insert text, and on a blank line, add the IP of the ag1-
listener . Then add ag1-listener after a space next to the IP.

Output
<IP of ag1-listener> ag1-listener

To exit the vi editor, first hit the Esc key, and then enter the command :wq to write
the file and quit. Do this on each node.

Test the listener and a failover

Test logging in to SQL Server using the availability group


listener
1. Use SQLCMD to log in to the primary node of SQL Server using the availability
group listener name:

Use a login that was previously created and replace <YourPassword> with the
correct password. The example below uses the sa login that was created with
the SQL Server.

Bash

sqlcmd -S ag1-listener -U sa -P <YourPassword>

2. Check the name of the server that you are connected to. Run the following
command in SQLCMD:

SQL

SELECT @@SERVERNAME

Your output should show the current primary node. This should be VM1 if you have
never tested a failover.

Exit the SQL Server session by typing the exit command.

Test a failover
1. Run the following command to manually fail over the primary replica to <VM2> or
another replica. Replace <VM2> with the value of your server name.

Bash
sudo pcs resource move ag_cluster-master <VM2> --master

2. If you check your constraints, you'll see that another constraint was added because
of the manual failover:

Bash

sudo pcs constraint list --full

You will see that a constraint with ID cli-prefer-ag_cluster-master was added.

3. Remove the constraint with ID cli-prefer-ag_cluster-master using the following


command:

Bash

sudo pcs constraint remove cli-prefer-ag_cluster-master

4. Check your cluster resources using the command sudo pcs resource , and you
should see that the primary instance is now <VM2> .

7 Note

This article contains references to the term slave, a term that Microsoft no
longer uses. When the term is removed from the software, we'll remove it
from this article.

Output

[<username>@<VM1> ~]$ sudo pcs resource

Master/Slave Set: ag_cluster-master [ag_cluster]

Masters: [ <VM2> ]

Slaves: [ <VM1> <VM3> ]

Resource Group: virtualip_group

azure_load_balancer (ocf::heartbeat:azure-lb): Started


<VM2>

virtualip (ocf::heartbeat:IPaddr2): Started <VM2>

5. Use SQLCMD to log in to your primary replica using the listener name:

Use a login that was previously created and replace <YourPassword> with the
correct password. The example below uses the sa login that was created with
the SQL Server.

Bash

sqlcmd -S ag1-listener -U sa -P <YourPassword>

6. Check the server that you are connected to. Run the following command in
SQLCMD:

SQL

SELECT @@SERVERNAME

You should see that you are now connected to the VM that you failed-over to.

Next steps
For more information on load balancers in Azure, see:

Configure a load balance for an availability group on SQL Server on Azure VMs
Tutorial: Set up a three node Always On
availability group with DH2i
DxEnterprise
Article • 02/13/2023

Applies to:
SQL Server on Azure VM

This tutorial explains how to configure an SQL Server Always On availability group with
DH2i DxEnterprise running on Linux-based Azure Virtual Machines (VMs).

For more information about DxEnterprise, see DH2i DxEnterprise .

7 Note

Microsoft supports data movement, availability groups, and the SQL Server
components. Contact DH2i for support related to the documentation of DH2i
DxEnterprise cluster, for the cluster and quorum management.

In this tutorial, you'll set up a DxEnterprise cluster using DxAdmin Client UI .


Optionally, you can also set up the cluster using the DxCLI command-line interface.
For this example, we've used four VMs. Three of those VMs are running Ubuntu 18.04,
and are part of the three node cluster. The fourth VM is running Windows 10 with the
DxAdmin tool to manage and configure the cluster.

This tutorial consists of the following steps:

" Install SQL Server on all virtual machines that will be part of the availability group.
" Install DxEnterprise on all the virtual machines and configure the DxEnterprise
cluster.
" Create the virtual hosts to provide failover support and high availability and add an
availability group and database to the availability group.
" Create the internal Azure Load Balancer for availability group listener (optional).
" Perform a manual or automatic failover.

Prerequisites
Create four virtual machines in Azure. Follow the Quickstart: Create Linux virtual
machine in Azure portal article to create Linux based virtual machines. Similarly, for
creating the Windows based virtual machine, follow the Quickstart: Create a
Windows virtual machine in the Azure portal article.
Install .NET 3.1 on all the Linux-based VMs that are going to be part of the cluster.
For instructions for the Linux operating system that you choose, see Install .NET on
Linux distributions.
A valid DxEnterprise license with availability group management features enabled
is required. For more information, see DxEnterprise Free Trial for a free trial.

Install SQL Server on Azure VMs in the


availability group
In this tutorial, you create a three node Linux-based cluster running the availability
group. Follow the documentation for SQL Server installation on Linux based on the
choice of your Linux platform. We also recommend you install the SQL Server tools for
this tutorial.

7 Note

Ensure that the Linux OS that you choose is a common distribution that is
supported by both DH2i DxEnterprise, Minimal System Requirements and
Microsoft SQL Server.

This tutorial uses Ubuntu 18.04, which is supported by both DH2i DxEnterprise and
Microsoft SQL Server.

For this tutorial, don't install SQL Server on the Windows VM, because this node isn't
going to be part of the cluster, and is used only to manage the cluster using DxAdmin.

After you complete this step, you should have SQL Server and SQL Server tools
(optionally) installed on all three Linux-based VMs that participate in the availability
group.
 

Install DxEnterprise on VMs and Configure the


cluster
In this step, install DH2i DxEnterprise for Linux on the three Linux VMs. The following
table describes the role each server plays in the cluster:
Number of DH2i DxEnterprise role Microsoft SQL Server availability group replica
VMs role

1 Cluster node - Linux Primary


based

1 Cluster node - Linux Secondary - Synchronous commit


based

1 Cluster node - Linux Secondary - Synchronous commit


based

1 DxAdmin Client NA

To install DxEnterprise on the three Linux-based nodes, follow the DH2i DxEnterprise
documentation based on the Linux operating system you choose. Install DxEnterprise
using any one of the methods listed below.

Ubuntu
Repo Installation Quick Start Guide
Extension Quick Start Guide
Marketplace Image Quick Start Guide
RHEL
Repo Installation Quick Start Guide
Extension Quick Start Guide
Marketplace Image Quick Start Guide

To install just the DxAdmin client tool on the Windows VM, follow DxAdmin Client UI
Quick Start Guide .

After this step, you should have the DxEnterprise cluster created on the Linux VMs, and
DxAdmin client installed on the Windows Client machine.

7 Note

You can also create a three node cluster where one of the node is added as
configuration-only mode to enable automatic failover. For more information, see
Supported Availability Modes.

Create the virtual hosts for failover support and


high availability
In this step, you create a virtual host, availability group, and then add a database, all
using the DxAdmin UI.

7 Note

During this step, the SQL Server instances are restarted to enable availability
groups.

Connect to the Windows client machine running DxAdmin to connect to the cluster
created in the step above. Follow the steps documented at MSSQL Availability Groups
with DxAdmin to enable Always On and create the virtual host and availability group.

 Tip

Before adding the databases, ensure the database is created and backed up on the
primary instance of SQL Server.

Create the internal Azure Load Balancer for


listener (optional)
In this optional step, you can create and configure the Azure Load balancer that holds
the IP addresses for the availability group listeners. For more information on Load
Balancer, see Azure Load Balancer. To configure the Load Balancer and availability group
listener using DxAdmin, see Azure Load Balancer Quick Start Guide .

After this step, you should have an availability group listener created and mapped to the
internal load balancer.

Test manual or automatic failover


For the automatic failover test, bring down the primary replica by turning off the virtual
machine from the Azure portal. This test replicates the sudden unavailability of the
primary node. The expected behavior is:

The cluster manager promotes one of the secondary replicas in the availability
group to primary.
The failed primary replica automatically joins the cluster after comes back up. The
cluster manager promotes it to secondary replica.

You could also perform a manual failover by following the following steps:
1. Connect to the cluster by using DxAdmin.
2. Expand the virtual host for the availability group.
3. Right-click on the target node/secondary replica and select Start Hosting on
Member to initiate the failover.

For more information on more operations within DxEnterprise, See DxEnterprise Admin
Guide and DxEnterprise DxCLI Guide .

Next Steps
Learn more about Availability Groups on Linux
Quickstart: Create Linux virtual machine in Azure portal
Quickstart: Create a Windows virtual machine in the Azure portal
Supported platforms for SQL Server 2019 on Linux
Frequently asked questions for
SQL Server on Linux virtual
machines
FAQ

Applies to: SQL Server on Azure VM

This article provides answers to some of the most common questions about running
SQL Server on Linux virtual machines.

If your Azure issue is not addressed in this article, visit the Azure forums on Microsoft Q
& A and Stack Overflow . You can post your issue in these forums, or post to
@AzureSupport on Twitter . You also can submit an Azure support request. To submit a
support request, on the Azure support page, select Get support.

Images
What SQL Server virtual machine gallery images
are available?
Azure maintains virtual machine (VM) images for all supported major releases of SQL
Server on all editions for both Linux and Windows. For more details, see the complete
list of Linux VM images and Windows VM images.

Are existing SQL Server virtual machine gallery


images updated?
Every two months, SQL Server images in the virtual machine gallery are updated with
the latest Linux and Windows updates. For Linux images, this includes the latest system
updates. For Windows images, this includes any updates that are marked as important
in Windows Update, including important SQL Server security updates and service packs.
SQL Server cumulative updates are handled differently for Linux and Windows. For
Linux, SQL Server cumulative updates are also included in the refresh. But at this time,
Windows VMs are not updated with SQL Server or Windows Server cumulative updates.
What related SQL Server packages are also
installed?
To see the SQL Server packages that are installed by default on SQL Server on Linux
VMs, see Installed packages.

Can SQL Server virtual machine images get


removed from the gallery?
Yes. Azure only maintains one image per major version and edition. For example, when a
new SQL Server service pack is released, Azure adds a new image to the gallery for that
service pack. The SQL Server image for the previous service pack is immediately
removed from the Azure portal. However, it is still available for provisioning from
PowerShell for the next three months. After three months, the previous service pack
image is no longer available. This removal policy would also apply if a SQL Server
version becomes unsupported when it reaches the end of its lifecycle.

Creation
How do I create a Linux virtual machine with SQL
Server?
The easiest solution is to create a Linux virtual machine that includes SQL Server. For a
tutorial on signing up for Azure and creating a SQL Server VM from the portal, see
Provision a Linux virtual machine running SQL Server in the Azure portal. You also have
the option of manually installing SQL Server on a VM with either a freely licensed edition
(Developer or Express) or by reusing an on-premises license. If you bring your own
license, you must have License Mobility through Software Assurance on Azure .

Why can't I provision an RHEL or SLES SQL


Server VM with an Azure subscription that has a
spending limit?
RHEL and SLES virtual machines require a subscription with no spending limit and a
verified payment method (usually a credit card) associated with the subscription. If you
provision an RHEL or SLES VM without removing the spending limit, your subscription
will get disabled and all VMs/services stopped. If you do run into this state, to re-enable
the subscription remove the spending limit . Your remaining credits will be restored for
the current billing cycle but an RHEL or SLES VM image surcharge will go against your
credit card if you choose to re-start and continue running it.

Licensing
How can I install my licensed copy of SQL Server
on an Azure VM?
First, create a Linux OS-only virtual machine. Then run the SQL Server installation steps
for your Linux distribution. Unless you are installing one of the freely licensed editions of
SQL Server, you must also have a SQL Server license and License Mobility through
Software Assurance on Azure .

Are there Bring-Your-Own-License (BYOL) Linux


virtual machine images for SQL Server?
At this time, there are no BYOL Linux virtual machine images for SQL Server. However,
you can manually install SQL Server on a Linux-only VM as discussed in the previous
questions.

Can I change a VM to use my own SQL Server


license if it was created from one of the pay-as-
you-go gallery images?
No. You cannot switch from pay-per-second licensing to using your own license. You
must create a new Linux VM, install SQL Server, and migrate your data. See the previous
question for more details about bringing your own license.

Administration
Can I manage a Linux virtual machine running
SQL Server with SQL Server Management Studio
(SSMS)?
Yes, but SSMS is currently a Windows-only tool. You must connect remotely from a
Windows machine to use SSMS with Linux VMs running SQL Server. Locally on Linux, the
new mssql-conf tool can perform many administrative tasks. For a cross-platform
database management tool, see Azure Data Studio.

Can I remove SQL Server completely from a SQL


Server VM?
Yes, but you will continue to be charged for your SQL Server VM as described in Pricing
guidance for SQL Server Azure VMs. If you no longer need SQL Server, you can deploy a
new virtual machine and migrate the data and applications to the new virtual machine.
Then you can remove the SQL Server virtual machine.

Updating and patching


How do I upgrade to a new version/edition of
the SQL Server in an Azure VM?
Currently, there is no in-place upgrade for SQL Server running in an Azure VM. Create a
new Azure virtual machine with the desired SQL Server version/edition, and then
migrate your databases to the new server using standard data migration techniques.

General
Are SQL Server high-availability solutions
supported on Azure VMs?
Not at this time. Always On availability groups and Failover Clustering both require a
clustering solution in Linux, such as Pacemaker. The supported Linux distributions for
SQL Server do not support their high availability add-ons in the cloud.

Resources
Linux VMs:

Overview of SQL Server on a Linux VM


Provision SQL Server on a Linux VM
SQL Server on Linux documentation

Windows VMs:
Overview of SQL Server on a Windows VM
Provision SQL Server on a Windows VM
FAQ (Windows)
SQL Server on Linux
Article • 03/31/2023

Applies to:
SQL Server - Linux

SQL Server 2022 (16.x) runs on Linux. It's the same SQL Server database engine, with
many similar features and services regardless of your operating system. To find out more
about this release, see What's new in SQL Server 2022.

Install
To get started, install SQL Server on Linux using one of the following quickstarts:

Install on Red Hat Enterprise Linux


Install on SUSE Linux Enterprise Server
Install on Ubuntu
Install containers for SQL Server on Linux
Provision a SQL VM in Azure

Container images
The SQL Server container images are published and available on the Microsoft Container
Registry (MCR), and also cataloged at the following locations, based on the operating
system image that was used when creating the container image:

For RHEL-based SQL Server container images, see SQL Server Red Hat
Containers .
For Ubuntu-based SQL Server images, see SQL Server on Docker Hub .

7 Note

Containers will only be published to MCR for the most recent Linux distributions. If
you create your own custom SQL Server container image for an older supported
distribution, it will still be supported. For more information, see Upcoming updates
to SQL Server container images on Microsoft Artifact Registry aka (MCR) .

Connect
After installation, connect to the SQL Server instance on your Linux machine. You can
connect locally or remotely and with various tools and drivers. The quickstarts
demonstrate how to use the sqlcmd command-line tool. Other tools include the
following:

Tool Tutorial

Visual Studio Code (VS Code) Use VS Code with SQL Server on Linux

SQL Server Management Studio Use SSMS on Windows to connect to SQL Server on
(SSMS) Linux

SQL Server Data Tools (SSDT) Use SSDT with SQL Server on Linux

Explore
Starting with SQL Server 2017 (14.x), SQL Server has the same underlying database
engine on all supported platforms, including Linux and containers. Therefore, many
existing features and capabilities operate the same way. This area of the documentation
exposes some of these features from a Linux perspective. It also calls out areas that have
unique requirements on Linux.

If you're already familiar with SQL Server on Linux, review the release notes for general
guidelines and known issues for this release:

SQL Server 2017 release notes


SQL Server 2019 release notes
SQL Server 2022 release notes

Then look at what's new:

What's new for SQL Server 2017


What's new for SQL Server 2019 on Linux
What's new in SQL Server 2022

 Tip

For answers to frequently asked questions, see the SQL Server on Linux FAQ.


Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback


Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.

For more information, see How to contribute to SQL Server documentation


Download SQL Server Data Tools (SSDT)
for Visual Studio
Article • 07/07/2023

Applies to:
SQL Server
Azure SQL Database
Azure Synapse Analytics

SQL Server Data Tools (SSDT) is a modern development tool for building SQL Server
relational databases, databases in Azure SQL, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS) reports. With SSDT, you
can design and deploy any SQL Server content type with the same ease as you would
develop an application in Visual Studio.

SSDT for Visual Studio 2022

Changes in SSDT for Visual Studio 2022


The core SSDT functionality to create database projects has remained integral to Visual
Studio.

7 Note

There's no SSDT standalone installer for Visual Studio 2022.

Install SSDT with Visual Studio 2022


If Visual Studio 2022 is already installed, you can edit the list of workloads to include
SSDT. If you don't have Visual Studio 2022 installed, then you can download and install
Visual Studio 2022 .

To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.

1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".
2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.

3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.

For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .

Analysis Services
Integration Services
Reporting Services

Supported SQL versions in Visual Studio 2022


Project Templates SQL Platforms Supported

Relational databases SQL Server 2016 (13.x) - SQL Server 2022 (16.x)

Azure SQL Database, Azure SQL Managed Instance

Azure Synapse Analytics (dedicated pools only)

Analysis Services models


SQL Server 2016 - SQL Server 2022

Reporting Services reports

Integration Services packages SQL Server 2019 - SQL Server 2022

License terms for Visual Studio


To understand the license terms and use cases for Visual Studio, refer to (Visual Studio
License Directory)[https://visualstudio.microsoft.com/license-terms/]. For example, if you
are using the Community Edition of Visual Studio for SQL Server Data Tools, review the
EULA for that specific edition of Visual Studio in the Visual Studio License Directory.

SSDT for Visual Studio 2019

Changes in SSDT for Visual Studio 2019


The core SSDT functionality to create database projects has remained integral to Visual
Studio.

With Visual Studio 2019, the required functionality to enable Analysis Services,
Integration Services, and Reporting Services projects has moved into the respective
Visual Studio (VSIX) extensions only.

7 Note

There's no SSDT standalone installer for Visual Studio 2019.

Install SSDT with Visual Studio 2019


If Visual Studio 2019 is already installed, you can edit the list of workloads to include
SSDT. If you don't have Visual Studio 2019 installed, then you can download and install
Visual Studio 2019 Community .
To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.

1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".

2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.

3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.

For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .

Analysis Services
Integration Services
Reporting Services

Supported SQL versions in Visual Studio 2019

Project Templates SQL Platforms Supported

Relational databases SQL Server 2012 - SQL Server 2019

Azure SQL Database, Azure SQL Managed Instance

Azure Synapse Analytics (dedicated pools only)

Analysis Services models


SQL Server 2008 - SQL Server 2019

Reporting Services reports

Integration Services packages SQL Server 2012 - SQL Server 2022

Offline installation
For scenarios where offline installation is required, such as low bandwidth or isolated
networks, SSDT is available for offline installation. Two approaches are available:

For a single machine, Download All, then install


For installation on one or more machines, use the Visual Studio bootstrapper from
the command line

For more details you can follow the Step-by-Step Guidelines for Offline Installation

Previous versions
To download and install SSDT for Visual Studio 2017, or an older version of SSDT, see
Previous releases of SQL Server Data Tools (SSDT and SSDT-BI).

See Also
SSDT MSDN Forum

SSDT Team Blog

DACFx API Reference

Download SQL Server Management Studio (SSMS)


Next steps
After installation of SSDT, work through these tutorials to learn how to create databases,
packages, data models, and reports using SSDT.

Project-Oriented Offline Database Development

SSIS Tutorial: Create a Simple ETL Package

Analysis Services tutorials

Create a Basic Table Report (SSRS Tutorial)


Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback


Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.

For more information, see How to contribute to SQL Server documentation


SQL tools overview
Article • 04/03/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

Azure Synapse Analytics
Analytics Platform System (PDW)

To manage your database, you need a tool. Whether your databases run in the cloud, on
Windows, on macOS, or on Linux, your tool doesn't need to run on the same platform as
the database.

You can view the links to the different SQL tools in the following tables.

7 Note

To download SQL Server, see Install SQL Server.

Recommended tools
The following tools provide a graphical user interface (GUI).

Tool Description Operating


system

A light-weight editor that can run on-demand SQL queries, view and Windows

save results as text, JSON, or Excel. Edit data, organize your favorite macOS

database connections, and browse database objects in a familiar Linux


object browsing experience.

Azure Data
Studio

Manage a SQL Server instance or database with full GUI support. Windows
Access, configure, manage, administer, and develop all components
of SQL Server, Azure SQL Database, and Azure Synapse Analytics.
Provides a single comprehensive utility that combines a broad
SQL Server group of graphical tools with a number of rich script editors to
Management provide access to SQL for developers and database administrators
Studio of all skill levels.
(SSMS)
Tool Description Operating
system

A modern development tool for building SQL Server relational Windows


databases, Azure SQL databases, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS)
SQL Server reports. With SSDT, you can design and deploy any SQL Server
Data Tools content type with the same ease as you would develop an
(SSDT) application in Visual Studio .

The mssql extension for Visual Studio Code is the official SQL Windows

Server extension that supports connections to SQL Server and rich macOS

editing experience for T-SQL in Visual Studio Code. Write T-SQL Linux
scripts in a light-weight editor.

Visual Studio
Code

Command-line tools
The tools below are the main command-line tools.

Tool Description Operating


system

bcp The bulk copy program utility (bcp) bulk copies data between an Windows

instance of Microsoft SQL Server and a data file in a user-specified macOS

format. Linux

mssql-cli mssql-cli is an interactive command-line tool for querying SQL Server. Windows

(preview) Also, query SQL Server with a command-line tool that features macOS

IntelliSense, syntax high-lighting, and more. Linux

mssql-conf mssql-conf configures SQL Server running on Linux. Linux

mssql- mssql-scripter is a multi-platform command-line experience for Windows

scripter scripting SQL Server databases. macOS

(preview) Linux

sqlcmd sqlcmd utility lets you enter Transact-SQL statements, system Windows

procedures, and script files at the command prompt. macOS

Linux

sqlpackage sqlpackage is a command-line utility that automates several database Windows

development tasks. macOS

Linux
Tool Description Operating
system

SQL Server SQL Server PowerShell provides cmdlets for working with SQL. Windows

PowerShell macOS

Linux

Migration and other tools


These tools are used to migrate, configure, and provide other features for SQL
databases.

Tool Description

Configuration Use SQL Server Configuration Manager to configure SQL Server services and
Manager configure network connectivity. Configuration Manager runs on Windows

Database Use Database Experimentation Assistant to evaluate a targeted version of SQL


Experimentation for a given workload.
Assistant

Data Migration The Data Migration Assistant tool helps you upgrade to a modern data
Assistant platform by detecting compatibility issues that can impact database
functionality in your new version of SQL Server or Azure SQL Database.

Distributed Use the Distributed Replay feature to help you assess the impact of future SQL
Replay Server upgrades. Also use Distributed Replay to help assess the impact of
hardware and operating system upgrades, and SQL Server tuning.

ssbdiagnose The ssbdiagnose utility reports issues in Service Broker conversations or the
configuration of Service Broker services.

SQL Server Use SQL Server Migration Assistant to automate database migration to SQL
Migration Server from Microsoft Access, DB2, MySQL, Oracle, and Sybase.
Assistant

If you're looking for additional tools that aren't mentioned on this page, see SQL
Command Prompt Utilities and Download SQL Server extended features and tools
Migration guide: IBM Db2 to SQL Server
on Azure VM
Article • 08/30/2022

Applies to:
SQL Server on Azure VM

This guide teaches you to migrate your user databases from IBM Db2 to SQL Server on
Azure VM, by using the SQL Server Migration Assistant for Db2.

For other migration guides, see Azure Database Migration Guides.

Prerequisites
To migrate your Db2 database to SQL Server, you need:

To verify that your source environment is supported.


SQL Server Migration Assistant (SSMA) for Db2 .
Connectivity between your source environment and your SQL Server VM in Azure.
A target SQL Server on Azure VM.

Pre-migration
After you have met the prerequisites, you're ready to discover the topology of your
environment and assess the feasibility of your migration.

Assess
Use SSMA for DB2 to review database objects and data, and assess databases for
migration.

To create an assessment, follow these steps:

1. Open SSMA for Db2 .

2. Select File > New Project.

3. Provide a project name and a location to save your project. Then select a SQL
Server migration target from the drop-down list, and select OK.
4. On Connect to Db2, enter values for the Db2 connection details.

5. Right-click the Db2 schema you want to migrate, and then choose Create report.
This will generate an HTML report. Alternatively, you can choose Create report
from the navigation bar after selecting the schema.
6. Review the HTML report to understand conversion statistics and any errors or
warnings. You can also open the report in Excel to get an inventory of Db2 objects
and the effort required to perform schema conversions. The default location for
the report is in the report folder within SSMAProjects.

For example: drive:\


<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date> .

Validate data types


Validate the default data type mappings, and change them based on requirements if
necessary. To do so, follow these steps:

1. Select Tools from the menu.

2. Select Project Settings.

3. Select the Type mappings tab.

4. You can change the type mapping for each table by selecting the table in the Db2
Metadata Explorer.

Convert schema
To convert the schema, follow these steps:

1. (Optional) Add dynamic or ad hoc queries to statements. Right-click the node, and
then choose Add statements.

2. Select Connect to SQL Server.


a. Enter connection details to connect to your instance of SQL Server on your
Azure VM.
b. Choose to connect to an existing database on the target server, or provide a
new name to create a new database on the target server.
c. Provide authentication details.
d. Select Connect.
3. Right-click the schema and then choose Convert Schema. Alternatively, you can
choose Convert Schema from the top navigation bar after selecting your schema.

4. After the conversion finishes, compare and review the structure of the schema to
identify potential problems. Address the problems based on the recommendations.

5. In the Output pane, select Review results. In the Error list pane, review errors.
6. Save the project locally for an offline schema remediation exercise. From the File
menu, select Save Project. This gives you an opportunity to evaluate the source
and target schemas offline, and perform remediation before you can publish the
schema to SQL Server on Azure VM.

Migrate
After you have completed assessing your databases and addressing any discrepancies,
the next step is to execute the migration process.

To publish your schema and migrate your data, follow these steps:

1. Publish the schema. In SQL Server Metadata Explorer, from the Databases node,
right-click the database. Then select Synchronize with Database.

2. Migrate the data. Right-click the database or object you want to migrate in Db2
Metadata Explorer, and choose Migrate data. Alternatively, you can select Migrate
Data from the navigation bar. To migrate data for an entire database, select the
check box next to the database name. To migrate data from individual tables,
expand the database, expand Tables, and then select the check box next to the
table. To omit data from individual tables, clear the check box.
3. Provide connection details for both the Db2 and SQL Server instances.

4. After migration finishes, view the Data Migration Report:

5. Connect to your instance of SQL Server on Azure VM by using SQL Server


Management Studio. Validate the migration by reviewing the data and schema.
Post-migration
After the migration is complete, you need to go through a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.

Remediate applications
After the data is migrated to the target environment, all the applications that formerly
consumed the source need to start consuming the target. Accomplishing this will in
some cases require changes to the applications.

Perform tests
Testing consists of the following activities:

1. Develop validation tests: To test database migration, you need to use SQL queries.
You must create the validation queries to run against both the source and the
target databases. Your validation queries should cover the scope you have defined.
2. Set up the test environment: The test environment should contain a copy of the
source database and the target database. Be sure to isolate the test environment.
3. Run validation tests: Run the validation tests against the source and the target,
and then analyze the results.
4. Run performance tests: Run performance tests against the source and the target,
and then analyze and compare the results.

Migration assets
For additional assistance, see the following resources, which were developed in support
of a real-world migration project engagement:

Asset Description

Data This tool provides suggested "best fit" target platforms, cloud readiness, and
workload application/database remediation level for a given workload. It offers simple, one-
assessment click calculation and report generation that helps to accelerate large estate
model and assessments by providing and automated and uniform target platform decision
tool process.
Asset Description

Db2 zOS After running the SQL script on a database, you can export the results to a file on
data assets the file system. Several file formats are supported, including *.csv, so that you can
discovery capture the results in external tools such as spreadsheets. This method can be
and useful if you want to easily share results with teams that do not have the
assessment workbench installed.
package

IBM Db2 This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables
LUW and provides a count of objects by schema and object type, a rough estimate of
inventory "raw data" in each schema, and the sizing of tables in each schema, with results
scripts and stored in a CSV format.
artifacts

IBM Db2 to The Database Compare utility is a Windows console application that you can use
SQL Server - to verify that the data is identical both on source and target platforms. You can use
Database the tool to efficiently compare data down to the row or column level in all or
Compare selected tables, rows, and columns.
utility

The Data SQL Engineering team developed these resources. This team's core charter is
to unblock and accelerate complex modernization for data platform migration projects
to Microsoft's Azure data platform.

Next steps
After migration, review the Post-migration validation and optimization guide.

For Microsoft and third-party services and tools that are available to assist you with
various database and data migration scenarios, see Data migration services and tools.

For video content, see Overview of the migration journey .


Migration guide: Oracle to SQL Server
on Azure Virtual Machines
Article • 03/24/2023

Applies to:
Azure SQL Database

This guide teaches you to migrate your Oracle schemas to SQL Server on Azure Virtual
Machines by using SQL Server Migration Assistant for Oracle.

For other migration guides, see Database Migration.

Prerequisites
To migrate your Oracle schema to SQL Server on Azure Virtual Machines, you need:

A supported source environment.


SQL Server Migration Assistant (SSMA) for Oracle .
A target SQL Server VM.
The necessary permissions for SSMA for Oracle and the provider.
Connectivity and sufficient permissions to access the source and the target.

Pre-migration
To prepare to migrate to the cloud, verify that your source environment is supported
and that you've addressed any prerequisites. Doing so will help to ensure an efficient
and successful migration.

This part of the process involves:

Conducting an inventory of the databases that you need to migrate.


Assessing those databases for potential migration problems or blockers.
Resolving any problems that you uncover.

Discover
Use MAP Toolkit to identify existing data sources and details about the features your
business is using. Doing so will give you a better understanding of the migration and
help you plan for it. This process involves scanning the network to identify your
organization's Oracle instances and the versions and features you're using.
To use MAP Toolkit to do an inventory scan, follow these steps:

1. Open MAP Toolkit .

2. Select Create/Select database:

3. Select Create an inventory database. Enter the name for the new inventory
database and a brief description, and then select OK
4. Select Collect inventory data to open the Inventory and Assessment Wizard:

5. In the Inventory and Assessment Wizard, select Oracle, and then select Next:
6. Select the computer search option that best suits your business needs and
environment, and then select Next:

7. Either enter credentials or create new credentials for the systems that you want to
explore, and then select Next:
8. Set the order of the credentials, and then select Next:

9. Enter the credentials for each computer you want to discover. You can use unique
credentials for every computer/machine, or you can use the All Computers
credential list.
10. Verify your selections, and then select Finish:

11. After the scan finishes, view the Data Collection summary. The scan might take a
few minutes, depending on the number of databases. Select Close when you're
done:
12. Select Options to generate a report about the Oracle assessment and database
details. Select both options, one at a time, to generate the report.

Assess
After you identify the data sources, use SQL Server Migration Assistant for Oracle to
assess the Oracle instances migrating to the SQL Server VM. The assistant will help you
understand the gaps between the source and destination databases. You can review
database objects and data, assess databases for migration, migrate database objects to
SQL Server, and then migrate data to SQL Server.

To create an assessment, follow these steps:

1. Open SQL Server Migration Assistant for Oracle .

2. On the File menu, select New Project.

3. Provide a project name and a location for your project, and then select a SQL
Server migration target from the list. Select OK:
4. Select Connect to Oracle. Enter values for the Oracle connection in the Connect to
Oracle dialog box:

Select the Oracle schemas that you want to migrate:


5. In Oracle Metadata Explorer, right-click the Oracle schema that you want to
migrate, and then select Create Report. Doing so will generate an HTML report. Or,
you can select the database and then select Create report in the top menu.

6. Review the HTML report for conversion statistics, errors, and warnings. Analyze it
to understand conversion problems and resolutions.
You can also open the report in Excel to get an inventory of Oracle objects and the
effort required to complete schema conversions. The default location for the report
is the report folder in SSMAProjects .

For example: drive:\


<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2016_11_12T0

2_47_55\

Validate data types


Validate the default data type mappings and change them based on requirements, if
necessary. To do so, follow these steps:

1. On the Tools menu, select Project Settings.

2. Select the Type Mappings tab.


3. You can change the type mapping for each table by selecting the table in Oracle
Metadata Explorer.

Convert the schema


To convert the schema, follow these steps:

1. (Optional) To convert dynamic or ad hoc queries, right-click the node and select
Add statement.

2. Select Connect to SQL Server in the top menu.


a. Enter connection details for your SQL Server on Azure VM.
b. Select your target database from the list, or provide a new name. If you provide
a new name, a database will be created on the target server.
c. Provide authentication details.
d. Select Connect.
3. Right-click the Oracle schema in Oracle Metadata Explorer and select Convert
Schema. Or, you can select Convert schema in the top menu:

4. After the schema conversion is complete, review the converted objects and
compare them to the original objects to identify potential problems. Use the
recommendations to address any problems:
Compare the converted Transact-SQL text to the original stored procedures and
review the recommendations:

You can save the project locally for an offline schema remediation exercise. To do
so, select Save Project on the File menu. Saving the project locally lets you
evaluate the source and target schemas offline and perform remediation before
you publish the schema to SQL Server.

5. Select Review results in the Output pane, and then review errors in the Error list
pane.

6. Save the project locally for an offline schema remediation exercise. Select Save
Project on the File menu. This gives you an opportunity to evaluate the source and
target schemas offline and perform remediation before you publish the schema to
SQL Server on Azure Virtual Machines.

Migrate
After you have the necessary prerequisites in place and have completed the tasks
associated with the pre-migration stage, you're ready to start the schema and data
migration. Migration involves two steps: publishing the schema and migrating the data.

To publish your schema and migrate the data, follow these steps:

1. Publish the schema: right-click the database in SQL Server Metadata Explorer and
select Synchronize with Database. Doing so publishes the Oracle schema to SQL
Server on Azure Virtual Machines.

Review the mapping between your source project and your target:
2. Migrate the data: right-click the database or object that you want to migrate in
Oracle Metadata Explorer and select Migrate Data. Or, you can select the Migrate
Data tab. To migrate data for an entire database, select the check box next to the
database name. To migrate data from individual tables, expand the database,
expand Tables, and then select the checkboxes next to the tables. To omit data
from individual tables, clear the checkboxes.

3. Provide connection details for Oracle and SQL Server on Azure Virtual Machines in
the dialog box.

4. After the migration finishes, view the Data Migration Report:


5. Connect to your SQL Server on Azure Virtual Machines instance by using SQL
Server Management Studio. Validate the migration by reviewing the data and
schema:

Instead of using SSMA, you could use SQL Server Integration Services (SSIS) to migrate
the data. To learn more, see:
The article SQL Server Integration Services.
The white paper SSIS for Azure and Hybrid Data Movement .

Post-migration
After you complete the migration stage, you need to complete a series of post-
migration tasks to ensure that everything is running as smoothly and efficiently as
possible.

Remediate applications
After the data is migrated to the target environment, all the applications that previously
consumed the source need to start consuming the target. Making those changes might
require changes to the applications.

Data Access Migration Toolkit is an extension for Visual Studio Code. It allows you to
analyze your Java source code and detect data access API calls and queries. The toolkit
provides a single-pane view of what needs to be addressed to support the new
database back end. To learn more, see Migrate your Java application from Oracle .

Perform tests
To test your database migration, complete these activities:

1. Develop validation tests. To test database migration, you need to use SQL queries.
Create the validation queries to run against both the source and target databases.
Your validation queries should cover the scope that you've defined.

2. Set up a test environment. The test environment should contain a copy of the
source database and the target database. Be sure to isolate the test environment.

3. Run validation tests. Run the validation tests against the source and the target,
and then analyze the results.

4. Run performance tests. Run performance test against the source and the target,
and then analyze and compare the results.

Validate migrated objects


Microsoft SQL Server Migration Assistant for Oracle Tester (SSMA Tester) allows you to
test migrated database objects. The SSMA Tester is used to verify that converted objects
behave in the same way.
Create test case
1. Open SSMA for Oracle, select Tester followed by New Test Case.

2. On the Test Case wizard, provide the following information:

Name: Enter the name to identify the test case.

Creation date: Today's current date, defined automatically.

Last Modified date: filled in automatically, should not be changed.

Description: Enter any additional information to identify the purpose of the test
case.
3. Select the objects that are part of the test case from the Oracle object tree located
on the left side.

In this example, stored procedure ADD_REGION and table REGION are selected.

To learn more, see Selecting and configuring objects to test.

4. Next, select the tables, foreign keys and other dependent objects from the Oracle
object tree in the left window.
To learn more, see Selecting and configuring affected objects.

5. Review the evaluation sequence of objects. Change the order by selecting the
buttons in the grid.

6. Finalize the test case by reviewing the information provided in the previous steps.
Configure the test execution options based on the test scenario.
For more information on test case settings, Finishing test case preparation

7. Select Finish to create the test case.

Run test case


When SSMA Tester runs a test case, the test engine executes the objects selected for
testing and generates a verification report.

1. Select the test case from test repository and then select run.

2. Review the launch test case and select run.


3. Next, provide Oracle source credentials. Select connect after entering the
credentials.
4. Provide target SQL Server credentials and select connect.

On success, the test case moves to initialization stage.

5. A real-time progress bar shows the execution status of the test run.
6. Review the report after the test is completed. The report provides the statistics, any
errors during the test run and a detail report.
7. Select details to get more information.

Example of positive data validation.

Example of failed data validation.


Optimize
The post-migration phase is crucial for reconciling any data accuracy problems and
verifying completeness. It's also critical for addressing performance issues with the
workload.

7 Note

For more information about these problems and specific steps to mitigate them,
see the Post-migration validation and optimization guide.

Migration resources
For more help with completing this migration scenario, see the following resources,
which were developed to support a real-world migration project.

Title/Link Description

Data Workload This tool provides suggested best-fit target platforms, cloud readiness, and
Assessment application/database remediation levels for a given workload. It offers simple
Model and one-click calculation and report generation that helps to accelerate large estate
Tool assessments by providing an automated and uniform target-platform decision
process.

Oracle This asset includes a PL/SQL query that targets Oracle system tables and
Inventory Script provides a count of objects by schema type, object type, and status. It also
Artifacts provides a rough estimate of raw data in each schema and the sizing of tables
in each schema, with results stored in a CSV format.

Automate This set of resources uses a .csv file as entry (sources.csv in the project folders)
SSMA Oracle to produce the XML files that you need to run an SSMA assessment in console
Assessment mode. You provide the source.csv file by taking an inventory of existing Oracle
Collection & instances. The output files are AssessmentReportGeneration_source_1.xml,
Consolidation ServersConnectionFile.xml, and VariableValueFile.xml.

SSMA issues With Oracle, you can assign a non-scalar condition in a WHERE clause. SQL
and possible Server doesn't support this type of condition. So SSMA for Oracle doesn't
remedies when convert queries that have a non-scalar condition in the WHERE clause. Instead,
migrating it generates an error: O2SS0001. This white paper provides details on the
Oracle problem and ways to resolve it.
databases

Oracle to SQL This document focuses on the tasks associated with migrating an Oracle
Server schema to the latest version of SQL Server. If the migration requires changes to
Migration features/functionality, you need to carefully consider the possible effect of
Handbook each change on the applications that use the database.
Title/Link Description

Oracle to SQL SSMA for Oracle Tester is the recommended tool to automatically validate the
Server - database object conversion and data migration, and it's a superset of Database
Database Compare functionality.

Compare
utility If you're looking for an alternative data validation option, you can use the
Database Compare utility to compare data down to the row or column level in
all or selected tables, rows, and columns.

The Data SQL Engineering team developed these resources. This team's core charter is
to unblock and accelerate complex modernization for data-platform migration projects
to the Microsoft Azure data platform.

Next steps
To check the availability of services applicable to SQL Server, see the Azure Global
infrastructure center .

For a matrix of the Microsoft and third-party services and tools that are available
to help you with various database and data migration scenarios and specialized
tasks, see Services and tools for data migration.

To learn more about Azure SQL, see:


Deployment options
SQL Server on Azure Virtual Machines
Azure total Cost of Ownership Calculator

To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices to cost and size workloads migrated to Azure

For information about licensing, see:


Bring your own license with the Azure Hybrid Benefit
Get free extended support for SQL Server

To assess the application access layer, use Data Access Migration Toolkit Preview .

For details on how to do data access layer A/B testing, see Overview of Database
Experimentation Assistant.
Migration overview: SQL Server to SQL
Server on Azure VMs
Article • 12/26/2022

Applies to:
SQL Server on Azure VM

Learn about the different migration strategies to migrate your SQL Server to SQL Server
on Azure Virtual Machines (VMs).

You can migrate SQL Server running on-premises or on:

SQL Server on Virtual Machines


Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Relational Database Service (Amazon RDS)
Google Compute Engine

For other migration guides, see Database Migration.

Overview
Migrate to SQL Server on Azure Virtual Machines (VMs) when you want to use the
familiar SQL Server environment with OS control, and want to take advantage of cloud-
provided features such as built-in VM high availability, automated backups, and
automated patching.

Save on costs by bringing your own license with the Azure Hybrid Benefit licensing
model or extend support for SQL Server 2012 by getting free security updates.

Choose appropriate target


Azure Virtual Machines run in many different regions of Azure and also offer various
machine sizes and Storage options.
When determining the correct size of VM and
Storage for your SQL Server workload, refer to the Performance Guidelines for SQL
Server on Azure Virtual Machines..

You can use the Azure SQL migration extension for Azure Data Studio to get right-sized
SQL Server on Azure Virtual Machines recommendation. The extension collects
performance data from your source SQL Server instance to provide right-sized Azure
recommendation that meets your workload's performance needs with minimal cost. To
learn more, see Get right-sized Azure recommendation for your on-premises SQL Server
database(s)

To determine the VM size and storage requirements for all your workloads in your data
estate, it's recommended that these are sized through a Performance-Based Azure
Migrate Assessment. If this isn't an available option, see the following article on creating
your own baseline for performance .

Consideration should also be made on the correct installation and configuration of SQL
Server on a VM. It's recommended to use the Azure SQL virtual machine image gallery
as this allows you to create a SQL Server VM with the right version, edition, and
operating system. This will also register the Azure VM with the SQL Server Resource
Provider automatically, enabling features such as Automated Backups and Automated
Patching.

Migration strategies
There are two migration strategies to migrate your user databases to an instance of SQL
Server on Azure VMs:
migrate, and lift and shift.

The appropriate approach for your business typically depends on the following factors:

Size and scale of migration


Speed of migration
Application support for code change
Need to change SQL Server Version, Operating System, or both.
Supportability life cycle of your existing products
Window for application downtime during migration

The following table describes differences in the two migration strategies:

Migration Description When to use


strategy
Migration Description When to use
strategy

Lift & Use the lift and shift migration strategy to move Use for single to large-scale
shift the entire physical or virtual SQL Server from its migrations, even applicable to
current location onto an instance of SQL Server scenarios such as data center exit.

on Azure VM without any changes to the


operating system, or SQL Server version. To Minimal to no code changes
complete a lift and shift migration, see Azure required to user SQL databases or
Migrate.
applications, allowing for faster
overall migrations.

The source server remains online and services


requests while the source and destination server No extra steps required for
synchronize data allowing for an almost migrating the Business
seamless migration. Intelligence services such as SSIS,
SSRS, and SSAS.

Migrate Use a migration strategy when you want to Use when there's a requirement
upgrade the target SQL Server and/or operating or desire to migrate to SQL Server
system version. on Azure Virtual Machines, or if
there's a requirement to upgrade
Select an Azure VM from Azure Marketplace or a legacy SQL Server and/or OS
prepared SQL Server image that matches the versions that are no longer in
source SQL Server version.
support.

Use the Azure SQL migration extension for May require some application or
Azure Data Studio to assess, get user database changes to support
recommendations for right-sized Azure the SQL Server upgrade.

configuration (VM series, compute and storage)


and migrate SQL Server database(s) to SQL There may be additional
Server on Azure virtual machines with minimal considerations for migrating
downtime. Business Intelligence services if in
the scope of migration.

Lift and shift


The following table details the available method for the lift and shift migration strategy
to migrate your SQL Server database to SQL Server on Azure VMs:

Method Minimum Minimum Source Notes


source target backup
version version size
constraint
Method Minimum Minimum Source Notes
source target backup
version version size
constraint

Azure SQL SQL Azure VM Existing SQL Server to be moved as-is to


Migrate Server Server storage instance of SQL Server on an Azure VM. Can
2008 SP4 2008 SP4 limit scale migration workloads of up to 35,000 VMs.

Source server(s) remain online and servicing


requests during synchronization of server data,
minimizing downtime.

Automation & scripting: Azure Site Recovery


Scripts and Example of scaled migration and
planning for Azure

7 Note

It's now possible to lift and shift both your failover cluster instance and availability
group solution to SQL Server on Azure VMs using Azure Migrate.

Migrate
Owing to the ease of setup, the recommended migration approach is to take a native
SQL Server backup locally and then copy the file to Azure. This method supports larger
databases (>1 TB) for all versions of SQL Server starting from 2008 and larger database
backups (>1 TB). Starting with SQL Server 2014, for database smaller than 1 TB that have
good connectivity to Azure, SQL Server backup to URL is the better approach.

When migrating SQL Server databases to an instance of SQL Server on Azure VMs, it's
important to choose an approach that suits when you need to cut over to the target
server as this affects the application downtime window.

The following table details all available methods to migrate your SQL Server database to
SQL Server on Azure VMs:

Method Minimum Minimum Source Notes


source target backup
version version size
constraint
Method Minimum Minimum Source Notes
source target backup
version version size
constraint

Azure SQL migration SQL SQL Azure VM This is an easy to use wizard
extension for Azure Server Server storage based extension in Azure Data
Data Studio 2008 2008 limit Studio for migrating SQL Server
database(s) to SQL Server on
Azure virtual machines. Use
compression to minimize backup
size for transfer.

The Azure SQL migration


extension for Azure Data Studio
provides assessment, Azure
recommendation and migration
capabilities in a simple user
interface and supports minimal
downtime migrations.

Distributed availability SQL SQL Azure VM A distributed availability group is


group Server Server storage a special type of availability
2016 2016 limit group that spans two separate
availability groups. The
availability groups that
participate in a distributed
availability group don't need to
be in the same location and
include cross-domain support.

This method minimizes


downtime, use when you have an
availability group configured on-
premises.

Automation & scripting: T-SQL

Backup to a file SQL SQL Azure VM This is a simple and well-tested


Server Server storage technique for moving databases
2008 SP4 2008 SP4 limit across machines. Use
compression to minimize backup
size for transfer.

Automation & scripting:


Transact-SQL (T-SQL) and
AzCopy to Blob storage
Method Minimum Minimum Source Notes
source target backup
version version size
constraint

Backup to URL SQL SQL 12.8 TB An alternative way to move the


Server Server for SQL backup file to the VM using
2012 SP1 2012 SP1 Server Azure storage. Use compression
CU2 CU2 2016, to minimize backup size for
otherwise transfer.

1 TB
Automation & scripting: T-SQL
or maintenance plan

Database Migration SQL SQL Azure VM The DMA assesses SQL Server
Assistant (DMA) Server Server storage on-premises and then seamlessly
2005 2008 SP4 limit upgrades to later versions of SQL
Server or migrates to SQL Server
on Azure VMs, Azure SQL
Database or Azure SQL Managed
Instance.

Shouldn't be used on
FILESTREAM-enabled user
databases.

DMA also includes capability to


migrate SQL and Windows logins
and assess SSIS Packages.

Automation & scripting:


Command line interface

Detach and attach SQL SQL Azure VM Use this method when you plan
Server Server storage to store these files using Azure
2008 SP4 2014 limit Blob Storage and attach them to
an instance of SQL Server on an
Azure VM, useful with very large
databases or when the time to
backup and restore is too long.

Automation & scripting: T-SQL


and AzCopy to Blob storage
Method Minimum Minimum Source Notes
source target backup
version version size
constraint

Log shipping SQL SQL Azure VM Log shipping replicates


Server Server storage transactional log files from on-
2008 SP4 2008 SP4 limit premises on to an instance of
(Windows (Windows SQL Server on an Azure VM.

Only) Only)
This provides minimal downtime
during failover and has less
configuration overhead than
setting up an Always On
availability group.

Automation & scripting: T-SQL

Convert on-premises SQL SQL Azure VM Use when bringing your own SQL
machine to Hyper-V Server Server storage Server license, when migrating a
VHDs, upload to Azure 2005 or 2005 or limit database that you'll run on an
Blob storage, and then greater greater older version of SQL Server, or
deploy a new virtual when migrating system and user
machine using databases together as part of the
uploaded VHD migration of database
dependent on other user
databases and/or system
databases.

Ship hard drive using SQL SQL Azure VM Use the Windows Import/Export
Windows Server Server storage Service when manual copy
Import/Export Service 2005 or 2005 or limit method is too slow, such as with
greater greater very large databases

 Tip

For large data transfers with limited to no network options, see Large data
transfers with limited connectivity.
It's now possible to lift and shift both your failover cluster instance and
availability group solution to SQL Server on Azure VMs using Azure Migrate.

Considerations
The following is a list of key points to consider when reviewing migration methods:
For optimum data transfer performance, migrate databases and files onto an
instance of SQL Server on Azure VM using a compressed backup file. For larger
databases, in addition to compression, split the backup file into smaller files for
increased performance during backup and transfer.
If migrating from SQL Server 2014 or higher, consider encrypting the backups to
protect data during network transfer.
To minimize downtime during database migration, use the Azure SQL migration
extension in Azure Data Studio or Always On availability group option.
For limited to no network options, use offline migration methods such as backup
and restore, or disk transfer services available in Azure.
To also change the version of SQL Server on a SQL Server on Azure VM, see
change SQL Server edition.

Business Intelligence
There may be additional considerations when migrating SQL Server Business Intelligence
services outside the scope of database migrations.

SQL Server Integration Services


You can migrate SQL Server Integration Services (SSIS) packages and projects in SSISDB
to SQL Server on Azure VM using one of the two methods below.

Backup and restore the SSISDB from the source SQL Server instance to SQL Server
on Azure VM. This will restore your packages in the SSISDB to the Integration
Services Catalog on your target SQL Server on Azure VM.
Redeploy your SSIS packages on your target SQL Server on Azure VM using one of
the deployment options.

If you have SSIS packages deployed as package deployment model, you can convert
them before migration. See the project conversion tutorial to learn more.

SQL Server Reporting Services


To migrate your SQL Server Reporting Services (SSRS) reports to your target SQL Server
on Azure VM, see Migrate a Reporting Services Installation (Native Mode)

Alternatively, you can also migrate SSRS reports to paginated reports in Power BI. Use
the RDL Migration Tool to help prepare and migrate your reports. Microsoft
developed this tool to help customers migrate Report Definition Language (RDL) reports
from their SSRS servers to Power BI. It's available on GitHub, and it documents an end-
to-end walkthrough of the migration scenario.

SQL Server Analysis Services


SQL Server Analysis Services databases (multidimensional or tabular models) can be
migrated from your source SQL Server to SQL Server on Azure VM using one of the
following options:

Interactively using SSMS


Programmatically using Analysis Management Objects (AMO)
By script using XMLA (XML for Analysis)

See Move an Analysis Services Database to learn more.

Alternatively, you can consider migrating your on-premises Analysis Services tabular
models to Azure Analysis Services or to Power BI Premium by using the new XMLA
read/write endpoints.

Server objects
Depending on the setup in your source SQL Server, there may be additional SQL Server
features that will require manual intervention to migrate them to SQL Server on Azure
VM by generating scripts in Transact-SQL (T-SQL) using SQL Server Management Studio
and then running the scripts on the target SQL Server on Azure VM. Some of the
commonly used features are:

Logins and roles


Linked server(s)
External Data Sources
Agent jobs
Alerts
Database Mail
Replication

For a complete list of metadata and server objects that you need to move, see Manage
Metadata When Making a Database Available on Another Server.

Supported versions
As you prepare for migrating SQL Server databases to SQL Server on Azure VMs, be sure
to consider the versions of SQL Server that are supported. For a list of current supported
SQL Server versions on Azure VMs, please see SQL Server on Azure VMs.

Migration assets
For additional assistance, see the following resources that were developed for real world
migration projects.

Asset Description

Data This tool provides suggested "best fit" target platforms, cloud readiness, and
workload application/database remediation level for a given workload. It offers simple, one-
assessment select calculation and report generation that helps to accelerate large estate
model and assessments by providing and automated and uniform target platform decision
tool process.

Perfmon A tool that collects Perform data to understand baseline performance that helps
data the migration target recommendation. This tool that uses logman.exe to create
collection the command that will create, start, stop, and delete performance counters set on
automation a remote SQL Server.
using
Logman

Multiple- This whitepaper outlines the steps to set up multiple Azure virtual machines in a
SQL-VM- SQL Server Always On Availability Group configuration.
VNet-ILB

Azure virtual These PowerShell scripts provide a programmatic option to retrieve the list of
machines regions that support Azure virtual machines supporting Ultra SSDs.
supporting
Ultra SSD
per
Region

The Data SQL Engineering team developed these resources. This team's core charter is
to unblock and accelerate complex modernization for data platform migration projects
to Microsoft's Azure data platform.

Next steps
To start migrating your SQL Server databases to SQL Server on Azure VMs, see the
Individual database migration guide.

For a matrix of the Microsoft and third-party services and tools that are available to
assist you with various database and data migration scenarios as well as specialty tasks,
see the article Service and tools for data migration.
To learn more about Azure SQL see:

Deployment options
SQL Server on Azure VMs
Azure total Cost of Ownership Calculator

To learn more about the framework and adoption cycle for Cloud migrations, see:

Cloud Adoption Framework for Azure


Best practices for costing and sizing workloads migrate to Azure

For information about licensing, see:

Bring your own license with the Azure Hybrid Benefit


Get free extended support for SQL Server
To assess the Application access layer, see Data Access Migration Toolkit
(Preview)
For details on how to perform Data Access Layer A/B testing see Database
Experimentation Assistant.
Migration guide: SQL Server to SQL
Server on Azure Virtual Machines
Article • 03/28/2023

Applies to:
SQL Server on Azure VM

In this guide, you learn how to discover, assess, and migrate your user databases from
SQL Server to an instance of SQL Server on Azure Virtual Machines by tools and
techniques based on your requirements.

You can migrate SQL Server running on-premises or on:

SQL Server on virtual machines (VMs).


Amazon Web Services (AWS) EC2.
Amazon Relational Database Service (AWS RDS).
Compute Engine (Google Cloud Platform [GCP]).

For information about extra migration strategies, see the SQL Server VM migration
overview. For other migration guides, see Azure Database Migration Guides.

Prerequisites
Migrating to SQL Server on Azure Virtual Machines requires the following resources:

Azure SQL migration extension for Azure Data Studio.


An Azure Migrate project (only required for SQL Server discovery in your data
estate).
A prepared target SQL Server on Azure Virtual Machines instance that's the same
or greater version than the SQL Server source.
Connectivity between Azure and on-premises.
Choosing an appropriate migration strategy.

Pre-migration
Before you begin your migration, you need to discover the topology of your SQL
environment and assess the feasibility of your intended migration.

Discover
Azure Migrate assesses migration suitability of on-premises computers, performs
performance-based sizing, and provides cost estimations for running on-premises. To
plan for the migration, use Azure Migrate to identify existing data sources and details
about the features your SQL Server instances use. This process involves scanning the
network to identify all of your SQL Server instances in your organization with the version
and features in use.

) Important

When you choose a target Azure virtual machine for your SQL Server instance, be
sure to consider the Performance guidelines for SQL Server on Azure Virtual
Machines.

For more discovery tools, see the services and tools available for data migration
scenarios.

Assess
When migrating from SQL Server on-premises to SQL Server on Azure Virtual Machines,
it is unlikely that you'll have any compatibility or feature parity issues if the source and
target SQL Server versions are the same. If you're not upgrading the version of SQL
Server, skip this step and move to the Migrate section.

Before migration, it's still a good practice to run an assessment of your SQL Server
databases to identify migration blockers (if any) and the Azure SQL migration extension
for Azure Data Studio does that before migration.

7 Note

If you are assessing the entire SQL Server data estate at scale on VMware, use
Azure Migrate to get Azure SQL deployment recommendations, target sizing, and
monthly estimates.

Assess user databases


The Azure SQL migration extension for Azure Data Studio provides a seamless wizard
based experience to assess, get Azure recommendations and migrate your SQL Server
databases on-premises to SQL Server on Azure Virtual Machines. Besides, highlighting
any migration blockers or warnings, the extension also includes an option for Azure
recommendations to collect your databases' performance data to recommend a right-
sized SQL Server on Azure Virtual Machines to meet the performance needs of your
workload (with the least price).

To learn more about Azure recommendations, see Get right-sized Azure


recommendation for your on-premises SQL Server database(s).

) Important

To assess databases using the Azure SQL migration extension, ensure that the
logins used to connect the source SQL Server are members of the sysadmin server
role or have CONTROL SERVER permission.

For a version upgrade, use Data Migration Assistant to assess on-premises SQL Server
instances if you are upgrading to an instance of SQL Server on Azure Virtual Machines
with a higher version to understand the gaps between the source and target versions.

Assess the applications


Typically, an application layer accesses user databases to persist and modify data. Data
Migration Assistant can assess the data access layer of an application in two ways:

By using captured extended events or SQL Server Profiler traces of your user
databases. You can also use the Database Experimentation Assistant to create a
trace log that can also be used for A/B testing.
By using the Data Access Migration Toolkit (preview) , which provides discovery
and assessment of SQL queries within the code and is used to migrate application
source code from one database platform to another. This tool supports popular file
types like C#, Java, XML, and plain text. For a guide on how to perform a Data
Access Migration Toolkit assessment, see the Use Data Migration Assistant blog
post.
During the assessment of user databases, use Data Migration Assistant to import
captured trace files or Data Access Migration Toolkit files.

Assessments at scale

If you have multiple servers that require Azure readiness assessment, you can automate
the process by using scripts using one of the following options. To learn more about
using scripting see Migrate databases at scale using automation.

Az.DataMigration PowerShell module


az datamigration CLI extension
Data Migration Assistant command-line interface

For summary reporting across large estates, Data Migration Assistant assessments can
also be consolidated into Azure Migrate.

Upgrade databases with Data Migration Assistant

For upgrade scenario, you might have a series of recommendations to ensure your user
databases perform and function correctly after upgrade. Data Migration Assistant
provides details on the impacted objects and resources for how to resolve each issue.
Make sure to resolve all breaking changes and behavior changes before you start
production upgrade.

For deprecated features, you can choose to run your user databases in their original
compatibility mode if you want to avoid making these changes and speed up migration.
This action will prevent upgrading your database compatibility until the deprecated
items have been resolved.

U Caution

Not all SQL Server versions support all compatibility modes. Check that your target
SQL Server version supports your chosen database compatibility. For example, SQL
Server 2019 doesn't support databases with level 90 compatibility (which is SQL
Server 2005). These databases would require, at least, an upgrade to compatibility
level 100.

Migrate
After you've completed the pre-migration steps, you're ready to migrate the user
databases and components. Migrate your databases by using your preferred migration
method.

The following sections provide options for performing a migration in order of


preference:

migrate using the Azure SQL migration extension for Azure Data Studio with
minimal downtime
backup and restore
detach and attach from a URL
convert to a VM, upload to a URL, and deploy as a new VM
log shipping
ship a hard drive
migrate objects outside user databases

Migrate using the Azure SQL migration extension for


Azure Data Studio (minimal downtime)
To perform a minimal downtime migration using Azure Data Studio, follow the high level
steps below. For a detailed step-by-step tutorial, see Migrate SQL Server to SQL Server
on Azure Virtual Machine online using Azure Data Studio:

1. Download and install Azure Data Studio and the Azure SQL migration extension.
2. Launch the Migrate to Azure SQL wizard in the extension in Azure Data Studio.
3. Select databases for assessment and view migration readiness or issues (if any).
Additionally, collect performance data and get right-sized Azure recommendation.
4. Select your Azure account and your target SQL Server on Azure Machine from your
subscription.
5. Select the location of your database backups. Your database backups can either be
located on an on-premises network share or in an Azure Blob Storage container.
6. Create a new Azure Database Migration Service using the wizard in Azure Data
Studio. If you have previously created an Azure Database Migration Service using
Azure Data Studio, you can reuse the same if desired.
7. Optional: If your backups are on an on-premises network share, download and
install self-hosted integration runtime on a machine that can connect to source
SQL Server and the location containing the backup files.
8. Start the database migration and monitor the progress in Azure Data Studio. You
can also monitor the progress under the Azure Database Migration Service
resource in Azure portal.
9. Complete the cutover.
a. Stop all incoming transactions to the source database.
b. Make application configuration changes to point to the target database in SQL
Server on Azure Virtual Machine.
c. Take any tail log backups for the source database in the backup location
specified.
d. Ensure all database backups have the status Restored in the monitoring details
page.
e. Select Complete cutover in the monitoring details page.

Backup and restore


To perform a standard migration by using backup and restore:

1. Set up connectivity to SQL Server on Azure Virtual Machines based on your


requirements. For more information, see Connect to a SQL Server virtual machine
on Azure (Resource Manager).
2. Pause or stop any applications that are using databases intended for migration.
3. Ensure user databases are inactive by using single user mode.
4. Perform a full database backup to an on-premises location.
5. Copy your on-premises backup files to your VM by using a remote desktop, Azure
Data Explorer, or the AzCopy command-line utility. (Greater than 2-TB backups are
recommended.)
6. Restore full database backups to the SQL Server on Azure Virtual Machines.

Detach and attach from a URL


Detach your database and log files and transfer them to Azure Blob storage. Then attach
the database from the URL on your Azure VM. Use this method if you want the physical
database files to reside in Blob storage, which might be useful for very large databases.
Use the following general steps to migrate a user database using this manual method:

1. Detach the database files from the on-premises database instance.


2. Copy the detached database files into Azure Blob storage using the AZCopy
command-line utility.
3. Attach the database files from the Azure URL to the SQL Server instance in the
Azure VM.

Convert to a VM, upload to a URL, and deploy as a new


VM
Use this method to migrate all system and user databases in an on-premises SQL Server
instance to an Azure virtual machine. Use the following general steps to migrate an
entire SQL Server instance using this manual method:

1. Convert physical or virtual machines to Hyper-V VHDs.


2. Upload VHD files to Azure Storage by using the Add-AzureVHD cmdlet.
3. Deploy a new virtual machine by using the uploaded VHD.

7 Note

To migrate an entire application, consider using Azure Site Recovery.

Log shipping
Log shipping replicates transactional log files from on-premises on to an instance of
SQL Server on an Azure VM. This option provides minimal downtime during failover and
has less configuration overhead than setting up an Always On availability group.

For more information, see Log Shipping Tables and Stored Procedures.

Ship a hard drive


Use the Windows Import/Export Service method to transfer large amounts of file data to
Azure Blob storage in situations where uploading over the network is prohibitively
expensive or not feasible. With this service, you send one or more hard drives containing
that data to an Azure data center where your data will be uploaded to your storage
account.

Migrate objects outside user databases


More SQL Server objects might be required for the seamless operation of your user
databases post migration.

The following table provides a list of components and recommended migration


methods that can be completed before or after migration of your user databases.

Feature Component Migration methods

Databases Model Script with SQL Server Management Studio.

The tempdb Plan to move tempdb onto Azure VM temporary disk (SSD)) for
database best performance. Be sure to pick a VM size that has a sufficient
local SSD to accommodate your tempdb .
Feature Component Migration methods

User Use the Backup and restore methods for migration. Data
databases Migration Assistant doesn't support databases with FileStream.
with
FileStream

Security SQL Server Use Data Migration Assistant to migrate user logins.
and Windows
logins

SQL Server Script with SQL Server Management Studio.


roles

Cryptographic Recommend converting to use Azure Key Vault. This procedure


providers uses the SQL VM resource provider.

Server Backup Replace with database backup by using Azure Backup, or write
objects devices backups to Azure Storage (SQL Server 2012 SP1 CU2 +). This
procedure uses the SQL VM resource provider.

Linked servers Script with SQL Server Management Studio.

Server Script with SQL Server Management Studio.


triggers

Replication Local Script with SQL Server Management Studio.


publications

Local Script with SQL Server Management Studio.


subscribers

PolyBase PolyBase Script with SQL Server Management Studio.

Management Database mail Script with SQL Server Management Studio.

SQL Server Jobs Script with SQL Server Management Studio.


Agent

Alerts Script with SQL Server Management Studio.

Operators Script with SQL Server Management Studio.

Proxies Script with SQL Server Management Studio.

Operating Files, file Make a note of any other files or file shares that are used by
system shares your SQL servers and replicate on the Azure Virtual Machines
target.

Post-migration
After you successfully complete the migration stage, you need to complete a series of
post-migration tasks to ensure that everything is functioning as smoothly and efficiently
as possible.

Remediate applications
After the data is migrated to the target environment, all the applications that formerly
consumed the source need to start consuming the target. Accomplishing this task might
require changes to the applications in some cases.

Apply any fixes recommended by Data Migration Assistant to user databases. You need
to script these fixes to ensure consistency and allow for automation.

Perform tests
The test approach to database migration consists of the following activities:

1. Develop validation tests: To test the database migration, you need to use SQL
queries. Create validation queries to run against both the source and target
databases. Your validation queries should cover the scope you've defined.
2. Set up a test environment: The test environment should contain a copy of the
source database and the target database. Be sure to isolate the test environment.
3. Run validation tests: Run validation tests against the source and the target, and
then analyze the results.
4. Run performance tests: Run performance tests against the source and target, and
then analyze and compare the results.

 Tip

Use the Database Experimentation Assistant to assist with evaluating the target
SQL Server performance.

Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying
completeness, and addressing potential performance issues with the workload.

For more information about these issues and the steps to mitigate them, see:

Post-migration validation and optimization guide


Tuning performance in Azure SQL virtual machines
Azure cost optimization center

Next steps
To check the availability of services that apply to SQL Server, see the Azure global
infrastructure center .

For a matrix of Microsoft and third-party services and tools that are available to assist
you with various database and data migration scenarios and specialty tasks, see Services
and tools for data migration.

To learn more about Azure SQL, see:

Deployment options
SQL Server on Azure Virtual Machines
Azure Total Cost of Ownership (TCO) Calculator

To learn more about the framework and adoption cycle for cloud migrations, see:

Cloud Adoption Framework for Azure


Best practices for costing and sizing workloads for migration to Azure

For information about licensing, see:

Bring your own license with the Azure Hybrid Benefit


Get free extended support for SQL Server

To assess the application access layer, see Data Access Migration Toolkit (preview) .

For information about how to perform A/B testing for the data access layer, see
Overview of Database Experimentation Assistant.
Migrate an availability group to SQL
Server on Azure VM
Article • 10/27/2022

This article teaches you to migrate your SQL Server Always On availability group to SQL
Server on Azure VMs using the Azure Migrate: Server Migration tool. Using the
migration tool, you will be able to migrate each replica in the availability group to an
Azure VM hosting SQL Server, as well as the cluster metadata, availability group
metadata and other necessary high availability components.

In this article, you learn how to:

" Prepare Azure and source environment for migration.


" Start replicating servers.
" Monitor replication.
" Run a full server migration.
" Reconfigure Always On availability group.

This guide uses the agent-based migration approach of Azure Migrate, which treats any
server or virtual machine as a physical server. When migrating physical machines, Azure
Migrate: Server Migration uses the same replication architecture as the agent-based
disaster recovery in the Azure Site Recovery service, and some components share the
same code base. Some content might link to Site Recovery documentation.

Prerequisites
Before you begin this tutorial, you should complete the following prerequisites:

1. An Azure subscription. Create a free account , if necessary.


2. Install the Azure PowerShell Az module.
3. Download the PowerShell samples scripts from the GitHub repository.

Prepare Azure
Prepare Azure for migration with the Server Migration tool.

Task Details
Task Details

Create an Your Azure account needs Contributor or Owner permissions to create a new
Azure project.
Migrate
project

Verify Your Azure account needs Contributor or Owner permissions on the Azure
permissions subscription, permissions to register Azure Active Directory (Azure AD) apps, and
for your User Access Administrator permissions on the Azure subscription to create a Key
Azure Vault, to create a VM, and to write to an Azure managed disk.
account

Set up an Setup an Azure virtual network (VNet). When you replicate to Azure, Azure VMs are
Azure created and joined to the Azure VNet that you specify when you set up migration.
virtual
network

To check you have proper permissions, follow these steps:

1. In the Azure portal, open the subscription, and select Access control (IAM).
2. In Check access, find the relevant account, and select it to view permissions.
3. You should have Contributor or Owner permissions.

If you just created a free Azure account, you're the owner of your
subscription.
If you're not the subscription owner, work with the owner to assign the role.

If you need to assign permissions, follow the steps in Prepare for an Azure user account.

Prepare for migration


To prepare for server migration, verify the physical server settings, and prepare to
deploy a replication appliance.

Check machine requirements


Ensure source machines comply with requirements to migrate to Azure. Follow these
steps:

1. Verify server requirements.


2. Verify that source machines that you replicate to Azure comply with Azure VM
requirements.
3. Some Windows sources require a few additional changes. Migrating the source
before making these changes could prevent the VM from booting in Azure. For
some operating systems, Azure Migrate makes these changes automatically.

Prepare for replication


Azure Migrate: Server Migration uses a replication appliance to replicate machines to
Azure. The replication appliance runs the following components:

Configuration server: The configuration server coordinates communications


between on-premises and Azure, and manages data replication.
Process server: The process server acts as a replication gateway. It receives
replication data; optimizes it with caching, compression, and encryption, and sends
it to a cache storage account in Azure.

Prepare for appliance deployment as follows:

Create a Windows Server 2016 machine to host the replication appliance. Review
the machine requirements.
The replication appliance uses MySQL. Review the options for installing MySQL on
the appliance.
Review the Azure URLs required for the replication appliance to access public and
government clouds.
Review port access requirements for the replication appliance.

7 Note

The replication appliance should be installed on a machine other than the source
machine you are replicating or migrating, and not on any machine that has had the
Azure Migrate discovery and assessment appliance installed before.

Download replication appliance installer


To download the replication appliance installer, follow these steps:

1. In the Azure Migrate project > Servers, in Azure Migrate: Server Migration, select
Discover.
2. In Discover machines > Are your machines virtualized?, select Physical or other
(AWS, GCP, Xen, etc.).

3. In Target region, select the Azure region to which you want to migrate the
machines.

4. Select Confirm that the target region for migration is region-name.

5. Select Create resources. This creates an Azure Site Recovery vault in the
background.
If you've already set up migration with Azure Migrate: Server Migration, the
target option can't be configured, since resources were set up previously.
You can't change the target region for this project after selecting this button.
All subsequent migrations are to this region.

6. In Do you want to install a new replication appliance?, select Install a replication


appliance.

7. In Download and install the replication appliance software, download the


appliance installer, and the registration key. You need to the key in order to
register the appliance. The key is valid for five days after it's downloaded.

8. Copy the appliance setup file and key file to the Windows Server 2016 machine
you created for the appliance.

9. After the installation completes, the Appliance configuration wizard will launch
automatically (You can also launch the wizard manually by using the cspsconfigtool
shortcut that is created on the desktop of the appliance machine). Use the Manage
Accounts tab of the wizard to create a dummy account with the following details:

"guest" as the friendly name


"username" as the username
"password" as the password for the account.

You will use this dummy account in the Enable Replication stage.

10. After setup completes, and the appliance restarts, in Discover machines, select the
new appliance in Select Configuration Server, and select Finalize registration.
Finalize registration performs a couple of final tasks to prepare the replication
appliance.

Install Mobility service


Install the Mobility service agent on the servers you want to migrate. The agent installers
are available on the replication appliance. Find the right installer, and install the agent
on each machine you want to migrate.

To install the Mobility service, follow these steps:

1. Sign in to the replication appliance.

2. Navigate to %ProgramData%\ASR\home\svsystems\pushinstallsvc\repository .

3. Find the installer for the machine operating system and version. Review supported
operating systems.

4. Copy the installer file to the machine you want to migrate.

5. Make sure that you have the passphrase that was generated when you deployed
the appliance.

Store the file in a temporary text file on the machine.


You can obtain the passphrase on the replication appliance. From the
command line, run
C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe -v to view the

current passphrase.
Don't regenerate the passphrase. This will break connectivity and you will
have to reregister the replication appliance.
In the /Platform parameter, specify VMware for both VMware machines and
physical machines.

6. Connect to the machine and extract the contents of the installer file to a local
folder (such as c:\temp). Run this in an admin command prompt:

Windows Command Prompt


ren Microsoft-ASR_UA*Windows*release.exe MobilityServiceInstaller.exe

MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted

cd C:\Temp\Extracted

7. Run the Mobility Service Installer:

Windows Command Prompt

UnifiedAgent.exe /Role "MS" /Platform "VmWare" /Silent

8. Register the agent with the replication appliance:

Windows Command Prompt

cd C:\Program Files (x86)\Microsoft Azure Site Recovery\agent

UnifiedAgentConfigurator.exe /CSEndPoint <replication appliance IP


address> /PassphraseFilePath <Passphrase File Path>

It may take some time after installation for discovered machines to appear in Azure
Migrate: Server Migration. As VMs are discovered, the Discovered servers count rises.

Prepare source machines


To prepare source machines, run the Get-ClusterInfo.ps1 script on a cluster node to
retrieve information on the cluster resources. The script will output the role name,
resource name, IP, and probe port in the Cluster-Config.csv file.

PowerShell

./Get-ClusterInfo.ps1

Create load balancer


For the cluster and cluster roles to respond properly to requests, an Azure Load balancer
is required. Without a load balancer, the other VMs are unable to reach the cluster IP
address as it's not recognized as belonging to the network or the cluster.

To create the load balancer, follow these steps:

1. Fill out the columns in the Cluster-Config.csv file:

Column Description
header

NewIP Specify the IP address in the Azure virtual network (or subnet) for each resource in
the CSV file.

ServicePort Specify the service port to be used by each resource in the CSV file. For the SQL
clustered resource, use the same value for service port as the probe port in the CSV.
For other cluster roles, the default values used are 1433 but you can continue to use
the port numbers that are configured in your current setup.

1. Run the Create-ClusterLoadBalancer.ps1 script to create the load balancer using


the following parameters:

Parameter Type Description

ConfigFilePath Mandatory Specify the path for the Cluster-Config.csv file that
you have filled out in the previous step.

ResourceGroupName Mandatory Specify the name of the resource group in which the
load balancer is to be created.

VNetName Mandatory Specify the name of the Azure virtual network that the
load balancer will be associated to.

SubnetName Mandatory Specify the name of the subnet in the Azure virtual
network that the load balancer will be associated to.

VNetResourceGroupName Mandatory Specify the name of the resource group for the Azure
virtual network that the load balancer will be
associated to.

Location Mandatory Specify the location in which the load balancer should
be created.

LoadBalancerName Mandatory Specify the name of the load balancer to be created.

PowerShell
./Create-ClusterLoadBalancer.ps1 -ConfigFilePath ./clsuterinfo.csv -
ResourceGroupName $resoucegroupname -VNetName $vnetname -subnetName
$subnetname -VnetResourceGroupName $vnetresourcegroupname -Location "eastus"
-LoadBalancerName $loadbalancername

Replicate machines
Now, select machines for migration. You can replicate up to 10 machines together. If
you need to replicate more, then replicate them simultaneously in batches of 10.

To replicate machines, follow these steps:

1. In the Azure Migrate project > Servers, Azure Migrate: Server Migration, select
Replicate.

2. In Replicate, > Source settings > Are your machines virtualized?, select Physical
or other (AWS, GCP, Xen, etc.).
3. In On-premises appliance, select the name of the Azure Migrate appliance that
you set up.

4. In Process Server, select the name of the replication appliance.

5. In Guest credentials, select the dummy account created previously during the
replication installer setup previously in this article. Then select Next: Virtual
machines.

6. In Virtual Machines, in Import migration settings from an assessment?, leave the


default setting No, I'll specify the migration settings manually.

7. Check each VM you want to migrate. Then select Next: Target settings.
8. In Target settings, select the subscription, and target region to which you'll
migrate, and specify the resource group in which the Azure VMs will reside after
migration.

9. In Virtual Network, select the Azure VNet/subnet to which the Azure VMs will be
joined after migration.

10. In Availability options, select:

Availability Zone to pin the migrated machine to a specific Availability Zone in


the region. Use this option to distribute servers that form a multi-node
application tier across Availability Zones. If you select this option, you'll need
to specify the Availability Zone to use for each of the selected machines in
the Compute tab. This option is only available if the target region selected for
the migration supports Availability Zones.
Availability Set to place the migrated machine in an Availability Set. The
target resource group that was selected must have one or more availability
sets in order to use this option.
No infrastructure redundancy required option if you don't need either of
these availability configurations for the migrated machines.

11. In Disk encryption type, select:

Encryption-at-rest with platform-managed key


Encryption-at-rest with customer-managed key
Double encryption with platform-managed and customer-managed keys

7 Note

To replicate VMs with CMK, you'll need to create a disk encryption set under
the target Resource Group. A disk encryption set object maps Managed Disks
to a Key Vault that contains the CMK to use for SSE.

12. In Azure Hybrid Benefit:

Select No if you don't want to apply Azure Hybrid Benefit. Then select Next.
Select Yes if you have Windows Server machines that are covered with active
Software Assurance or Windows Server subscriptions, and you want to apply
the benefit to the machines you're migrating. Then select Next.

13. In Compute, review the VM name, size, OS disk type, and availability configuration
(if selected in the previous step). VMs must conform with Azure requirements.

VM size: If you're using assessment recommendations, the VM size


dropdown shows the recommended size. Otherwise Azure Migrate picks a
size based on the closest match in the Azure subscription. Alternatively, pick a
manual size in Azure VM size.
OS disk: Specify the OS (boot) disk for the VM. The OS disk is the disk that
has the operating system bootloader and installer.
Availability Zone: Specify the Availability Zone to use.
Availability Set: Specify the Availability Set to use.

14. In Disks, specify whether the VM disks should be replicated to Azure, and select
the disk type (standard SSD/HDD or premium managed disks) in Azure. Then
select Next.

15. In Review and start replication, review the settings, and select Replicate to start
the initial replication for the servers.

7 Note
You can update replication settings any time before replication starts, Manage >
Replicating machines. Settings can't be changed after replication starts.

Track and monitor


Replication proceeds in the following sequence:

When you select Replicate, a Start Replication job begins.


When the Start Replication job finishes successfully, the machines begin their initial
replication to Azure.
After initial replication finishes, delta replication begins. Incremental changes to
on-premises disks are periodically replicated to the replica disks in Azure.

You can track job status in the portal notifications.

You can monitor replication status by selecting on Replicating servers in Azure Migrate:
Server Migration.

Migrate VMs
After machines are replicated, they are ready for migration. To migrate your servers,
follow these steps:

1. In the Azure Migrate project > Servers > Azure Migrate: Server Migration, select
Replicating servers.
2. To ensure the migrated server is synchronized with the source server, stop the SQL
Server service on every replica in the availability group, starting with secondary
replicas (in SQL Server Configuration Manager > Services) while ensuring the
disks hosting SQL data are online.

3. In Replicating machines > select server name > Overview, ensure that the last
synchronized timestamp is after you have stopped the SQL Server service on the
servers to be migrated before you move onto the next step. This should only take
a few minutes.

4. In Replicating machines, right-click the VM > Migrate.

5. In Migrate > Shut down virtual machines and perform a planned migration with
no data loss, select No > OK.

7 Note

For physical server migration, shut down of source machine is not supported
automatically. The recommendation is to bring the application down as part
of the migration window (don't let the applications accept any connections)
and then initiate the migration (the server needs to be kept running, so
remaining changes can be synchronized) before the migration is completed.
6. A migration job starts for the VM. Track the job in Azure notifications.

7. After the job finishes, you can view and manage the VM from the Virtual Machines
page.

Reconfigure cluster
After your VMs have migrated, reconfigure the cluster. Follow these steps:

1. Shut down the migrated servers in Azure.

2. Add the migrated machines to the backend pool of the load balancer. Navigate to
Load Balancer > Backend pools.

3. Select the backend pool, and add the migrated machines.

4. Start the migrated servers in Azure and sign in to any node.

5. Copy the ClusterConfig.csv file and run the Update-ClusterConfig.ps1 script


passing the CSV as a parameter. This ensures the cluster resources are updated
with the new configuration for the cluster to work in Azure.

PowerShell

./Update-ClusterConfig.ps1 -ConfigFilePath $filepath

Your Always On availability group is ready.

Complete the migration


1. After the migration is done, right-click the VM > Stop migration. This does the
following:

Stops replication for the on-premises machine.


Removes the machine from the Replicating servers count in Azure Migrate:
Server Migration.
Cleans up replication state information for the machine.

2. Install the Azure VM Windows agent on the migrated machines.


3. Perform any post-migration app tweaks, such as updating database connection
strings, and web server configurations.
4. Perform final application and migration acceptance testing on the migrated
application now running in Azure.
5. Cut over traffic to the migrated Azure VM instance.
6. Remove the on-premises VMs from your local VM inventory.
7. Remove the on-premises VMs from local backups.
8. Update any internal documentation to show the new location and IP address of
the Azure VMs.

Post-migration best practices


For SQL Server:
Install SQL Server IaaS Agent extension to automate management and
administration tasks.
Optimize SQL Server performance on Azure VMs.
Understand pricing for SQL Server on Azure.
For increased resilience:
Keep data secure by backing up Azure VMs using the Azure Backup service.
Keep workloads running and continuously available by replicating Azure VMs to
a secondary region with Site Recovery.
For increased security:
Lock down and limit inbound traffic access with Microsoft Defender for Cloud -
Just in time administration.
Restrict network traffic to management endpoints with Network Security
Groups.
Deploy Azure Disk Encryption to help secure disks, and keep data safe from
theft and unauthorized access.
Read more about securing IaaS resources , and visit the Microsoft Defender
for Cloud .
For monitoring and management:
Consider deploying Azure Cost Management to monitor resource usage and
spending.

Next steps
Investigate the cloud migration journey in the Azure Cloud Adoption Framework.
Migrate failover cluster instance to SQL
Server on Azure VMs
Article • 03/27/2023

This article teaches you to migrate your Always On failover cluster instance (FCI) to SQL
Server on Azure VMs using the Azure Migrate: Server Migration tool. Using the
migration tool, you will be able to migrate each node in the failover cluster instance to
an Azure VM hosting SQL Server, as well as the cluster and FCI metadata.

In this article, you learn how to:

" Prepare Azure and source environment for migration.


" Start replicating VMs.
" Monitor replication.
" Run a full VM migration.
" Reconfigure SQL failover cluster with Azure shared disks.

This guide uses the agent-based migration approach of Azure Migrate, which treats any
server or virtual machine as a physical server. When migrating physical machines, Azure
Migrate: Server Migration uses the same replication architecture as the agent-based
disaster recovery in the Azure Site Recovery service, and some components share the
same code base. Some content might link to Site Recovery documentation.

Prerequisites
Before you begin this tutorial, you should:

1. An Azure subscription. Create a free account , if necessary.


2. Install the Azure PowerShell Az module.
3. Download the PowerShell samples scripts from the GitHub repository.

Prepare Azure
Prepare Azure for migration with Server Migration.

Task Details

Create an Your Azure account needs Contributor or Owner permissions to create a new
Azure project.
Migrate
project
Task Details

Verify Your Azure account needs Contributor or Owner permissions on the Azure
permissions subscription, permissions to register Azure Active Directory (Azure AD) apps, and
for your User Access Administrator permissions on the Azure subscription to create a Key
Azure Vault, to create a VM, and to write to an Azure managed disk.
account

Set up an Setup an Azure virtual network (VNet). When you replicate to Azure, Azure VMs are
Azure created and joined to the Azure VNet that you specify when you set up migration.
virtual
network

To check you have proper permissions, follow these steps:

1. In the Azure portal, open the subscription, and select Access control (IAM).
2. In Check access, find the relevant account, and select it to view permissions.
3. You should have Contributor or Owner permissions.

If you just created a free Azure account, you're the owner of your
subscription.
If you're not the subscription owner, work with the owner to assign the role.

If you need to assign permissions, follow the steps in Prepare for an Azure user account.

Prepare for migration


To prepare for server migration, you need to verify the server settings, and prepare to
deploy a replication appliance.

Check machine requirements


Make sure machines comply with requirements for migration to Azure.

1. Verify server requirements.


2. Verify that source machines that you replicate to Azure comply with Azure VM
requirements.
3. Some Windows sources require a few additional changes. Migrating the source
before making these changes could prevent the VM from booting in Azure. For
some operating systems, Azure Migrate makes these changes automatically.

Prepare for replication


Azure Migrate: Server Migration uses a replication appliance to replicate machines to
Azure. The replication appliance runs the following components:

Configuration server: The configuration server coordinates communications


between on-premises and Azure, and manages data replication.
Process server: The process server acts as a replication gateway. It receives
replication data; optimizes it with caching, compression, and encryption, and sends
it to a cache storage account in Azure.

Prepare for appliance deployment as follows:

Create a Windows Server 2016 machine to host the replication appliance. Review
the machine requirements.
The replication appliance uses MySQL. Review the options for installing MySQL on
the appliance.
Review the Azure URLs required for the replication appliance to access public and
government clouds.
Review port access requirements for the replication appliance.

7 Note

The replication appliance should be installed on a machine other than the source
machine you are replicating or migrating, and not on any machine that has had the
Azure Migrate discovery and assessment appliance installed to before.

Download replication appliance installer


To download the replication appliance installer, follow these steps:

1. In the Azure Migrate project > Servers, in Azure Migrate: Server Migration, select
Discover.
2. In Discover machines > Are your machines virtualized?, select Physical or other
(AWS, GCP, Xen, etc.).

3. In Target region, select the Azure region to which you want to migrate the
machines.

4. Select Confirm that the target region for migration is region-name.

5. Select Create resources. This creates an Azure Site Recovery vault in the
background.
If you've already set up migration with Azure Migrate Server Migration, the
target option can't be configured, since resources were set up previously.
You can't change the target region for this project after selecting this button.
All subsequent migrations are to this region.

6. In Do you want to install a new replication appliance?, select Install a replication


appliance.

7. In Download and install the replication appliance software, download the


appliance installer, and the registration key. You need to the key in order to
register the appliance. The key is valid for five days after it's downloaded.

8. Copy the appliance setup file and key file to the Windows Server 2016 machine
you created for the appliance.

9. After the installation completes, the Appliance configuration wizard will launch
automatically (You can also launch the wizard manually by using the cspsconfigtool
shortcut that is created on the desktop of the appliance machine). Use the Manage
Accounts tab of the wizard to create a dummy account with the following details:

"guest" as the friendly name


"username" as the username
"password" as the password for the account.

You will use this dummy account in the Enable Replication stage.

10. After setup completes, and the appliance restarts, in Discover machines, select the
new appliance in Select Configuration Server, and select Finalize registration.
Finalize registration performs a couple of final tasks to prepare the replication
appliance.

Install the Mobility service


Install the Mobility service agent on the servers you want to migrate. The agent installers
are available on the replication appliance. Find the right installer, and install the agent
on each machine you want to migrate.

To install the Mobility service, follow these steps:

1. Sign in to the replication appliance.

2. Navigate to %ProgramData%\ASR\home\svsystems\pushinstallsvc\repository .

3. Find the installer for the machine operating system and version. Review supported
operating systems.

4. Copy the installer file to the machine you want to migrate.

5. Make sure that you have the passphrase that was generated when you deployed
the appliance.

Store the file in a temporary text file on the machine.


You can obtain the passphrase on the replication appliance. From the
command line, run
C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe -v to view the

current passphrase.
Don't regenerate the passphrase. This will break connectivity and you will
have to reregister the replication appliance.
In the /Platform parameter, specify VMware for both VMware machines and
physical machines.

6. Connect to the machine and extract the contents of the installer file to a local
folder (such as c:\temp). Run this in an admin command prompt:

Windows Command Prompt


ren Microsoft-ASR_UA*Windows*release.exe MobilityServiceInstaller.exe

MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted

cd C:\Temp\Extracted

7. Run the Mobility Service Installer:

Windows Command Prompt

UnifiedAgent.exe /Role "MS" /Platform "VmWare" /Silent

8. Register the agent with the replication appliance:

Windows Command Prompt

cd C:\Program Files (x86)\Microsoft Azure Site Recovery\agent

UnifiedAgentConfigurator.exe /CSEndPoint <replication appliance IP


address> /PassphraseFilePath <Passphrase File Path>

It may take some time after installation for discovered machines to appear in Azure
Migrate: Server Migration. As VMs are discovered, the Discovered servers count rises.

Prepare source machines


To prepare source machines, you'll need information from the cluster.

U Caution

Maintain disk ownership throughout the replication process until the final
cutover. If there is a change in disk ownership, there is a chance that the
volumes could be corrupted and replication would need to be to retriggered.
Set the preferred owner for each disk to avoid transfer of ownership during
the replication process.

Avoid patching activities and system reboots during the replication process to
avoid transfer of disk ownership.

To prepare source machines, do the following:

1. Identify disk ownership: Sign in to one of the cluster nodes and open Failover
Cluster Manager. Identify the owner node for the disks to determine the disks that
need to be migrated with each server.

2. Retrieve cluster information: Run the Get-ClusterInfo.ps1 script on a cluster


node to retrieve information on the cluster resources. The script will output the
role name, resource name, IP, and probe port in the Cluster-Config.csv file. Use
this CSV file to create and assign resource in Azure later in this article.

PowerShell

./Get-ClusterInfo.ps1

Create load balancer


For the cluster and cluster roles to respond properly to requests, an Azure Load balancer
is required. Without a load balancer, the other VMs are unable to reach the cluster IP
address as it's not recognized as belonging to the network or the cluster.

1. Fill out the columns in the Cluster-Config.csv file:

Column Description
header

NewIP Specify the IP address in the Azure virtual network (or subnet) for each
resource in the CSV file.

ServicePort Specify the service port to be used by each resource in the CSV file. For SQL
cluster resource, use the same value for service port as the probe port in the
CSV. For other cluster roles, the default values used are 1433 but you can
continue to use the port numbers that are configured in your current setup.

2. Run the Create-ClusterLoadBalancer.ps1 script to create the load balancer using


the following mandatory parameters:
Parameter Type Description

ConfigFilePath Mandatory Specify the path for the Cluster-Config.csv file


that you have filled out in the previous step.

ResourceGroupName Mandatory Specify the name of the resource Group in which


the load balancer is to be created.

VNetName Mandatory Specify the name of the Azure virtual network


that the load balancer will be associated to.

SubnetName Mandatory Specify the name of the subnet in the Azure


virtual network that the load balancer will be
associated to.

VNetResourceGroupName Mandatory Specify the name of the resource group for the
Azure virtual network that the load balancer will
be associated to.

Location Mandatory Specify the location in which the load balancer


should be created.

LoadBalancerName Mandatory Specify the name of the load balancer to be


created.

PowerShell

./Create-ClusterLoadBalancer.ps1 -ConfigFilePath ./cluster-config.csv -


ResourceGroupName $resoucegroupname -VNetName $vnetname -subnetName
$subnetname -VnetResourceGroupName $vnetresourcegroupname -Location
"eastus" -LoadBalancerName $loadbalancername

Replicate machines
Now, select machines for migration. You can replicate up to 10 machines together. If
you need to replicate more, then replicate them simultaneously in batches of 10.

1. In the Azure Migrate project > Servers, Azure Migrate: Server Migration, select
Replicate.
2. In Replicate, > Source settings > Are your machines virtualized?, select Physical
or other (AWS, GCP, Xen, etc.).

3. In On-premises appliance, select the name of the Azure Migrate appliance that
you set up.

4. In Process Server, select the name of the replication appliance.

5. In Guest credentials, select the dummy account created previously during the
replication installer setup. Then select Next: Virtual machines.
6. In Virtual Machines, in Import migration settings from an assessment?, leave the
default setting No, I'll specify the migration settings manually.

7. Check each VM you want to migrate. Then select Next: Target settings.

8. In Target settings, select the subscription, and target region to which you'll
migrate, and specify the resource group in which the Azure VMs will reside after
migration.
9. In Virtual Network, select the Azure VNet/subnet to which the Azure VMs will be
joined after migration.

10. In Availability options, select:

Availability Zone to pin the migrated machine to a specific Availability Zone in


the region. Use this option to distribute servers that form a multi-node
application tier across Availability Zones. If you select this option, you'll need
to specify the Availability Zone to use for each of the selected machine in the
Compute tab. This option is only available if the target region selected for the
migration supports Availability Zones
Availability Set to place the migrated machine in an Availability Set. The
target Resource Group that was selected must have one or more availability
sets in order to use this option.
No infrastructure redundancy required option if you don't need either of
these availability configurations for the migrated machines.

11. In Disk encryption type, select:

Encryption-at-rest with platform-managed key


Encryption-at-rest with customer-managed key
Double encryption with platform-managed and customer-managed keys

7 Note

To replicate VMs with CMK, you'll need to create a disk encryption set under
the target Resource Group. A disk encryption set object maps Managed Disks
to a Key Vault that contains the CMK to use for SSE.

12. In Azure Hybrid Benefit:

Select No if you don't want to apply Azure Hybrid Benefit. Then select Next.
Select Yes if you have Windows Server machines that are covered with active
Software Assurance or Windows Server subscriptions, and you want to apply
the benefit to the machines you're migrating. Then select Next.
13. In Compute, review the VM name, size, OS disk type, and availability configuration
(if selected in the previous step). VMs must conform with Azure requirements.

VM size: If you're using assessment recommendations, the VM size


dropdown shows the recommended size. Otherwise Azure Migrate picks a
size based on the closest match in the Azure subscription. Alternatively, pick a
manual size in Azure VM size.
OS disk: Specify the OS (boot) disk for the VM. The OS disk is the disk that
has the operating system bootloader and installer.
Availability Zone: Specify the Availability Zone to use.
Availability Set: Specify the Availability Set to use.
14. In Disks, specify whether the VM disks should be replicated to Azure, and select
the disk type (standard SSD/HDD or premium managed disks) in Azure. Then
select Next.

Use the list that you had made earlier to select the disks to be replicated with
each server. Exclude other disks from replication.

15. In Review and start replication, review the settings, and select Replicate to start
the initial replication for the servers.

7 Note

You can update replication settings any time before replication starts, Manage >
Replicating machines. Settings can't be changed after replication starts.

Track and monitor


Replication proceeds in the following sequence:

When you select Replicate a Start Replication job begins.


When the Start Replication job finishes successfully, the machines begin their initial
replication to Azure.
After initial replication finishes, delta replication begins. Incremental changes to
on-premises disks are periodically replicated to the replica disks in Azure.
After the initial replication is completed, configure the Compute and Network
items for each VM. Clusters typically have multiple NICs but only one NIC is
required for the migration (set the others as do not create).

You can track job status in the portal notifications.

You can monitor replication status by selecting on Replicating servers in Azure Migrate:
Server Migration.

Migrate VMs
After machines are replicated, they are ready for migration. To migrate your servers,
follow these steps:

1. In the Azure Migrate project > Servers > Azure Migrate: Server Migration, select
Replicating servers.
2. To ensure that the migrated server is synchronized with the source server, stop the
SQL Server resource (in Failover Cluster Manager > Roles > Other resources)
while ensuring that the cluster disks are online.

3. In Replicating machines > select server name > Overview, ensure that the last
synchronized timestamp is after you have stopped SQL Server resource on the
servers to be migrated before you move onto the next step. This should only take
a few of minutes.

4. In Replicating machines, right-click the VM > Migrate.

5. In Migrate > Shut down virtual machines and perform a planned migration with
no data loss, select No > OK.

7 Note

For Physical Server Migration, shut down of source machine is not supported
automatically. The recommendation is to bring the application down as part
of the migration window (don't let the applications accept any connections)
and then initiate the migration (the server needs to be kept running, so
remaining changes can be synchronized) before the migration is completed.
6. A migration job starts for the VM. Track the job in Azure notifications.

7. After the job finishes, you can view and manage the VM from the Virtual Machines
page.

Reconfigure cluster
After your VMs have migrated, reconfigure the cluster. Follow these steps:

1. Shut down the migrated servers in Azure.

2. Add the migrated machines to the backend pool of the load balancer. Navigate to
Load Balancer > Backend pools.

3. Select the backend pool, and add the migrated machines.

4. Reconfigure the migrated disks of the servers as shared disks by running the
Create-SharedDisks.ps1 script. The script is interactive and will prompt for a list of
machines and then show available disks to be extracted (only data disks). You will
be prompted once to select which machines contain the drives to be turned into
shared disks. Once selected, you will be prompted again, once per machine, to pick
the specific disks.

Parameter Type Description

ResourceGroupName Mandatory Specify the name of the resource group containing


the migrated servers.

NumberofNodes Optional Specify the number of nodes in your failover cluster


instance. This parameter is used to identify the right
SKU for the shared disks to be created. By default, the
script assumes the number of nodes in the cluster to
be 2.

DiskNamePrefix Optional Specify the prefix that you'd want to add to the names
of your shared disks.

PowerShell

./Create-SharedDisks.ps1 -ResourceGroupName $resoucegroupname -


NumberofNodes $nodesincluster -DiskNamePrefix $disknameprefix

5. Attach the shared disks to the migrated servers by running the Attach-
SharedDisks.ps1 script.
Parameter Type Description

ResourceGroupName Mandatory Specify the name of the resource group containing


the migrated servers.

StartingLunNumber Optional Specify the starting LUN number that is available for
the shared disks to be attached to. By default, the
script tries to attach shared disks to LUN starting 0.

PowerShell

./Attach-ShareDisks.ps1 -ResourceGroupName $resoucegroupname

6. Start the migrated servers in Azure and sign in to any node.

7. Copy the Cluster-Config.csv file and run the Update-ClusterConfig.ps1 script


passing the CSV as a parameter. This will ensure the cluster resources are updated
with the new configuration for the cluster to work in Azure.

PowerShell

./Update-ClusterConfig.ps1 -ConfigFilePath $filepath

Your SQL Server failover cluster instance is ready.

Complete the migration


1. After the migration is done, right-click the VM > Stop migration. This does the
following:

Stops replication for the on-premises machine.


Removes the machine from the Replicating servers count in Azure Migrate:
Server Migration.
Cleans up replication state information for the machine.

2. Install the Azure VM Windows agent on the migrated machines.


3. Perform any post-migration app tweaks, such as updating database connection
strings, and web server configurations.
4. Perform final application and migration acceptance testing on the migrated
application now running in Azure.
5. Cut over traffic to the migrated Azure VM instance.
6. Remove the on-premises VMs from your local VM inventory.
7. Remove the on-premises VMs from local backups.
8. Update any internal documentation to show the new location and IP address of
the Azure VMs.

Post-migration best practices


For SQL Server:
Install SQL Server IaaS Agent extension to automate management and
administration tasks. The SQL IaaS Agent extension only supports limited
functionality on SQL Server failover clustered instances.
Optimize SQL Server performance on Azure VMs.
Understand pricing for SQL Server on Azure.
For increased security:
Lock down and limit inbound traffic access with Microsoft Defender for Cloud -
Just in time administration.
Restrict network traffic to management endpoints with Network Security
Groups.
Deploy Azure Disk Encryption to help secure disks, and keep data safe from
theft and unauthorized access.
Read more about securing IaaS resources , and visit the Microsoft Defender
for Cloud .
For monitoring and management:
Consider deploying Azure Cost Management to monitor resource usage and
spending.

Next steps
Investigate the cloud migration journey in the Azure Cloud Adoption Framework.
Prerequisites: Migrate to SQL Server VM
using distributed AG
Article • 08/30/2022

Use a distributed availability group (AG) to migrate either a standalone instance of SQL
Server or an Always On availability group to SQL Server on Azure Virtual Machines
(VMs).

This article describes the prerequisites to prepare your source and target environments
to migrate your SQL Server instance or availability group to SQL Server VMs using a
distributed ag.

Migrating a database (or multiple databases) from a standalone instance using a


distributed availability group is a simple solution that does not require a Windows
Server Failover Cluster, or an availability group listener on either the source or the
target. Migrating an availability group requires a cluster, and a listener on both source
and target.

Source SQL Server


To migrate your instance or availability group, your source SQL Server should meet the
following prerequisites:

For a standalone instance migration, the minimum supported version is SQL Server
2017. For an availability group migration, SQL Server 2016 or later is supported.
Your SQL Server edition should be enterprise.
You must enable the Always On feature.
The databases you intend to migrate have been backed up in full mode.
If you already have an availability group, it must be in a healthy state. If you create
an availability group as part of this process, it must be in a healthy state before you
start the migration.
Ports used by the SQL Server instance (1433 by default) and the database
mirroring endpoint (5022 by default) must be open in the firewall. To migrate
databases in an availability group, make sure the port used by the listener is also
open in the firewall.

Target SQL Server VM


Before your target SQL Server VMs are ready for migration, make sure they meet the
following prerequisites:

The Azure account performing the migration is assigned as the owner or


contributor to the resource group that contains target the SQL Server VMs.
To use automatic seeding to create your distributed availability group (DAG), the
instance name for the global primary (source) of the DAG must match the instance
name of the forwarder (target) of the DAG. If there is an instance name mismatch
between the global primary and forwarder, then you must use manual seeding to
create the DAG, and manually add any additional database files in the future.
For simplicity, the target SQL Server instance should match the version of the
source SQL Server instance. If you choose to upgrade during the migration process
by using a higher version of SQL Server on the target, then you will need to
manually seed your database rather than relying on autoseeding as is provided in
this series of articles. Review Migrate to higher SQL Server versions for more
details.
The SQL Server edition should be enterprise.
You must enable the Always On feature.
Ports used by the SQL Server instance (1433 by default) and the database
mirroring endpoint (5022 by default) must be open in the firewall. To migrate
databases in an availability group, make sure the port used by the listener is also
open in the firewall.

Connectivity
The source and target SQL Server instance must have an established network
connection.

If the source SQL Server instance is located on an on-premises network, configure a


Site-to-site VPN connection or an Azure ExpressRoute connection between the on-
premises network and the virtual network where your target SQL Server VM resides.

If your source SQL Server instance is located on an Azure virtual network that is different
than the target SQL Server VM, then configure virtual network peering.

Authentication
To simplify authentication between your source and target SQL Server instance, join
both servers to the same domain, preferably with the domain being on the source side
and apply domain-based authentication. Since this is the recommended approach, the
steps in this tutorial series assume both source and target SQL Server instance are part
of the same domain.

If the source and target servers are part of different domains, configure federation
between the two domains, or configure a domain-independent availability group.

Next steps
Once you have configured both source and target environment to meet the
prerequisites, you're ready to migrate either your standalone instance of SQL Server or
an Always On availability group to your target SQL Server VM(s).
Use distributed AG to migrate databases
from a standalone instance
Article • 08/30/2022

Use a distributed availability group (AG) to migrate a database (or multiple databases)
from a standalone instance of SQL Server to SQL Server on Azure Virtual Machines
(VMs).

Once you've validated your source SQL Server instance meets the prerequisites, follow
the steps in this article to create an availability group on your standalone SQL Server
instance and migrate your database (or group of databases) to your SQL Server VM in
Azure.

This article is intended for databases on a standalone instance of SQL Server. This
solution does not require a Windows Server Failover Cluster (WSFC) or an availability
group listener. It's also possible to migrate databases in an availability group.

Initial setup
The first step is to create your SQL Server VM in Azure. You can do so by using the Azure
portal, Azure PowerShell, or an ARM template.

Be sure to configure your SQL Server VM according to the prerequisites.

For simplicity, join your target SQL Server VM to the same domain as your source SQL
Server. Otherwise, join your target SQL Server VM to a domain that's federated with the
domain of your source SQL Server.
To use automatic seeding to create your distributed availability group (DAG), the
instance name for the global primary (source) of the DAG must match the instance
name of the forwarder (target) of the DAG. If there is an instance name mismatch
between the global primary and forwarder, then you must use manual seeding to create
the DAG, and manually add any additional database files in the future.

This article uses the following example parameters:

Database name: Adventureworks


Source machine name (global primary in DAG): OnPremNode
Source SQL Server instance name: MSSQLSERVER
Source availability group name: OnPremAg
Target SQL Server VM name (forwarder in DAG): SQLVM
Target SQL Server on Azure VM instance name: MSSQLSERVER
Target availability group name: AzureAG
Endpoint name: Hadr_endpoint
Distributed availability group name: DAG
Domain name: Contoso

Create endpoints
Use Transact-SQL (T-SQL) to create endpoints on both your source (OnPremNode) and
target (SQLVM) SQL Server instances.

To create your endpoints, run this T-SQL script on both source and target servers:

SQL

CREATE ENDPOINT [Hadr_endpoint]

STATE=STARTED

AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)

FOR DATA_MIRRORING (

ROLE = ALL,

AUTHENTICATION = WINDOWS NEGOTIATE,

ENCRYPTION = REQUIRED ALGORITHM AES

GO

Domain accounts automatically have access to endpoints, but service accounts may not
automatically be part of the sysadmin group and may not have connect permission. To
manually grant the SQL Server service account connect permission to the endpoint, run
the following T-SQL script on both servers:

SQL
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [<your account>]

Create source AG
Since a distributed availability group is a special availability group that spans across two
individual availability groups, you first need to create an availability group on the source
SQL Server instance. If you already have an availability group that you would like to
maintain in Azure, then migrate your availability group instead.

Use Transact-SQL (T-SQL) to create an availability group (OnPremAg) on the source


(OnPremNode) instance for the example Adventureworks database.

To create the availability group, run this script on the source:

SQL

CREATE AVAILABILITY GROUP [OnPremAG]

WITH (AUTOMATED_BACKUP_PREFERENCE = PRIMARY,

DB_FAILOVER = OFF,

DTC_SUPPORT = NONE,

CLUSTER_TYPE=NONE )

FOR DATABASE [Adventureworks]

REPLICA ON N'OnPremNode'

WITH (ENDPOINT_URL = N'TCP://OnPremNode.contoso.com:5022', FAILOVER_MODE =


MANUAL,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

SEEDING_MODE = AUTOMATIC, SECONDARY_ROLE(ALLOW_CONNECTIONS = NO));

GO

Create target AG
You also need to create an availability group on the target SQL Server VM as well.

Use Transact-SQL (T-SQL) to create an availability group (AzureAG) on the target


(SQLVM) instance.

To create the availability group, run this script on the target:

SQL

CREATE AVAILABILITY GROUP [AzureAG]

WITH (AUTOMATED_BACKUP_PREFERENCE = PRIMARY,

DB_FAILOVER = OFF,

DTC_SUPPORT = NONE,

CLUSTER_TYPE=NONE,

REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 0)

FOR REPLICA ON N'SQLVM'

WITH (ENDPOINT_URL = N'TCP://SQLVM.contoso.com:5022', FAILOVER_MODE =


MANUAL,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

SEEDING_MODE = AUTOMATIC,SECONDARY_ROLE(ALLOW_CONNECTIONS = NO));

GO

Create distributed AG
After you have your source (OnPremAG) and target (AzureAG) availability groups
configured, create your distributed availability group to span both individual availability
groups.

Use Transact-SQL on the source SQL Server instance (OnPremNode) and AG


(OnPremAG) to create the distributed availability group (DAG).

To create the distributed AG, run this script on the source:

SQL

CREATE AVAILABILITY GROUP [DAG]

WITH (DISTRIBUTED)

AVAILABILITY GROUP ON

'OnPremAG' WITH

LISTENER_URL = 'tcp://OnPremNode.contoso.com:5022',

AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,

FAILOVER_MODE = MANUAL,

SEEDING_MODE = AUTOMATIC

),

'AzureAG' WITH

LISTENER_URL = 'tcp://SQLVM.contoso.com:5022',

AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,

FAILOVER_MODE = MANUAL,

SEEDING_MODE = AUTOMATIC

);

GO

7 Note
The seeding mode is set to AUTOMATIC as the version of SQL Server on the target
and source is the same. If your SQL Server target is a higher version, or if your
global primary and forwarder have different instance names, then create the
distributed ag, and join the secondary AG to the distributed ag with
SEEDING_MODE set to MANUAL . Then manually restore your databases from the
source to the target SQL Server instance. Review upgrading versions during
migration to learn more.

After your distributed AG is created, join the target AG (AzureAG) on the target instance
(SQLVM) to the distributed AG (DAG).

To join the target AG to the distributed AG, run this script on the target:

SQL

ALTER AVAILABILITY GROUP [DAG]

JOIN

AVAILABILITY GROUP ON

'OnPremAG' WITH

(LISTENER_URL = 'tcp://OnPremNode.contoso.com:5022',

AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,

FAILOVER_MODE = MANUAL,

SEEDING_MODE = AUTOMATIC

),

'AzureAG' WITH

(LISTENER_URL = 'tcp://SQLVM.contoso.com:5022',

AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,

FAILOVER_MODE = MANUAL,

SEEDING_MODE = AUTOMATIC

);

GO

If you need to cancel, pause, or delay synchronization between the source and target
availability groups (such as, for example, performance issues), run this script on the
source global primary instance (OnPremNode):

SQL

ALTER AVAILABILITY GROUP [DAG]

MODIFY

AVAILABILITY GROUP ON

'AzureAG' WITH

( SEEDING_MODE = MANUAL );

To learn more, review cancel automatic seeding to forwarder.


Next steps
After your distributed availability group is created, you are ready to complete the
migration.
Use distributed AG to migrate
availability group
Article • 08/30/2022

Use a distributed availability group (AG) to migrate databases in an Always On


availability group while maintaining high availability and disaster recovery (HADR)
support post migration on your SQL Server on Azure Virtual Machines (VMs).

Once you've validated your source SQL Server instances meet the prerequisites, follow
the steps in this article to create a distributed availability between your existing
availability group, and your target availability group on your SQL Server on Azure VMs.

This article is intended for databases participating in an availability group, and requires a
Windows Server Failover Cluster (WSFC) and an availability group listener. It's also
possible to migrate databases from a standalone SQL Server instance.

Initial setup
The first step is to create your SQL Server VMs in Azure. You can do so by using the
Azure portal, Azure PowerShell, or an ARM template.

Be sure to configure your SQL Server VMs according to the prerequisites. Choose
between a single subnet deployment, which relies on an Azure Load Balancer or
distributed network name to route traffic to your availability group listener, or a multi-
subnet deployment which does not have such a requirement. The multi-subnet
deployment is recommended. To learn more, see connectivity.

For simplicity, join your target SQL Server VMs to the same domain as your source SQL
Server instances. Otherwise, join your target SQL Server VM to a domain that's federated
with the domain of your source SQL Server instances.

To use automatic seeding to create your distributed availability group (DAG), the
instance name for the global primary (source) of the DAG must match the instance
name of the forwarder (target) of the DAG. If there is an instance name mismatch
between the global primary and forwarder, then you must use manual seeding to create
the DAG, and manually add any additional database files in the future.

This article uses the following example parameters:

Database name: Adventureworks


Source machine names : OnPremNode1 (global primary in DAG), OnPremNode2
Source SQL Server instance names: MSSQLSERVER, MSSQLSERVER
Source availability group name : OnPremAg
Source availability group listener name: OnPremAG_LST
Target SQL Server VM names: SQLVM1 (forwarder in DAG), SQLVM2
Target SQL Server on Azure VM instance names: MSSQLSERVER, MSSQLSERVER
Target availability group name: AzureAG
Source availability group listener name: AzureAG_LST
Endpoint name: Hadr_endpoint
Distributed availability group name: DAG
Domain name: Contoso

Create endpoints
Use Transact-SQL (T-SQL) to create endpoints on both your two source instances
(OnPremNode1, OnPremNode2) and target SQL Server instances (SQLVM1, SQLVM2).

If you already have an availability group configured on the source instances, only run
this script on the two target instances.

To create your endpoints, run this T-SQL script on both source and target servers:

SQL

CREATE ENDPOINT [Hadr_endpoint]

STATE=STARTED

AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)

FOR DATA_MIRRORING (

ROLE = ALL,

AUTHENTICATION = WINDOWS NEGOTIATE,

ENCRYPTION = REQUIRED ALGORITHM AES

GO

Domain accounts automatically have access to endpoints, but service accounts may not
automatically be part of the sysadmin group and may not have connect permission. To
manually grant the SQL Server service account connect permission to the endpoint, run
the following T-SQL script on both servers:

SQL

GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [<your account>]

Create source AG
Since a distributed availability group is a special availability group that spans across two
individual availability groups, you first need to create an availability group on the two
source SQL Server instances.

If you already have an availability group on your source instances, skip this section.

Use Transact-SQL (T-SQL) to create an availability group (OnPremAG) between your two
source instances (OnPremNode1, OnPremNode2) for the example Adventureworks
database.

To create the availability group on the source instances, run this script on the source
primary replica (OnPremNode1):

SQL

CREATE AVAILABILITY GROUP [OnPremAG]

WITH ( AUTOMATED_BACKUP_PREFERENCE = PRIMARY,

DB_FAILOVER = OFF,

DTC_SUPPORT = NONE )

FOR DATABASE [Adventureworks]

REPLICA ON

N'OnPremNode1' WITH (ENDPOINT_URL =


N'TCP://OnPremNode1.contoso.com:5022',

FAILOVER_MODE = AUTOMATIC,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

SEEDING_MODE = AUTOMATIC,

SECONDARY_ROLE(ALLOW_CONNECTIONS = NO)),

N'OnPremNode2' WITH (ENDPOINT_URL =


N'TCP://OnPremNode2.contoso.com:5022',

FAILOVER_MODE = AUTOMATIC,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

SEEDING_MODE = AUTOMATIC,

SECONDARY_ROLE(ALLOW_CONNECTIONS = NO));

Next, to join the secondary replica (OnPremNode2) to the availability group


(OnPremAg).

To join the availability group, run this script on the source secondary replica:

SQL

ALTER AVAILABILITY GROUP [OnPremAG] JOIN;

GO

ALTER AVAILABILITY GROUP [OnPremAG] GRANT CREATE ANY DATABASE;

GO

Finally, create the listener for your global forwarder availability group (OnPremAG).

To create the listener, run this script on the source primary replica:

SQL

USE [master]

GO

ALTER AVAILABILITY GROUP [OnPremAG]

ADD LISTENER N'OnPremAG_LST' (

WITH IP ((<available static ip>, <mask>)

, PORT=60173);

GO

Create target AG
You also need to create an availability group on the target SQL Server VMs as well.

If you already have an availability group configured between your SQL Server instances
in Azure, skip this section.

Use Transact-SQL (T-SQL) to create an availability group (AzureAG) on the target SQL
Server instances (SQLVM1 and SQLVM2).

To create the availability group on the target, run this script on the target primary
replica:

SQL

CREATE AVAILABILITY GROUP [AzureAG]

FOR

REPLICA ON N'SQLVM1' WITH (ENDPOINT_URL =


N'TCP://SQLVM1.contoso.com:5022',

FAILOVER_MODE = MANUAL,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

BACKUP_PRIORITY = 50,

SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),

SEEDING_MODE = AUTOMATIC),

N'SQLVM2' WITH (ENDPOINT_URL = N'TCP://SQLVM2.contoso.com:5022',

FAILOVER_MODE = MANUAL,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

BACKUP_PRIORITY = 50,

SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),

SEEDING_MODE = AUTOMATIC);

GO

Next, join the target secondary replica (SQLVM2) to the availability group (AzureAG).

Run this script on the target secondary replica:

SQL

ALTER AVAILABILITY GROUP [AzureAG] JOIN;

GO

ALTER AVAILABILITY GROUP [AzureAG] GRANT CREATE ANY DATABASE;

GO

Finally, create a listener (AzureAG_LST) for your target availability group (AzureAG). If
you deployed your SQL Server VMs to multiple subnets, create your listener using
Transact-SQL. If you deployed your SQL Server VMs to a single subnet, configure either
an Azure Load Balancer, or a distributed network name for your listener.

To create your listener, run this script on the primary replica of the availability group in
Azure.

SQL

ALTER AVAILABILITY GROUP [AzureAG]

ADD LISTENER N'AzureAG_LST' (

WITH IP

( (N'<primary replica's secondary ip >', N'<primary mask>'), (N'<secondary


replica's secondary ip>', N'<secondary mask>') )

, PORT=<port number you set>);

GO

Create distributed AG
After you have your source (OnPremAG) and target (AzureAG) availability groups
configured, create your distributed availability group to span both individual availability
groups.
Use Transact-SQL on the source SQL Server global primary (OnPremNode1) and AG
(OnPremAG) to create the distributed availability group (DAG).

To create the distributed AG on the source, run this script on the source global primary:

SQL

CREATE AVAILABILITY GROUP [DAG]

WITH (DISTRIBUTED)

AVAILABILITY GROUP ON

'OnPremAG' WITH

LISTENER_URL = 'tcp://OnPremAG_LST.contoso.com:5022',

AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,

FAILOVER_MODE = MANUAL,

SEEDING_MODE = AUTOMATIC

),

'AzureAG' WITH

LISTENER_URL = 'tcp://AzureAG_LST.contoso.com:5022',

AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,

FAILOVER_MODE = MANUAL,

SEEDING_MODE = AUTOMATIC

);

GO

7 Note

The seeding mode is set to AUTOMATIC as the version of SQL Server on the target
and source is the same. If your SQL Server target is a higher version, or if your
global primary and forwarder have different instance names, then create the
distributed ag, and join the secondary AG to the distributed ag with
SEEDING_MODE set to MANUAL . Then manually restore your databases from the
source to the target SQL Server instance. Review upgrading versions during
migration to learn more.

After your distributed AG is created, join the target AG (AzureAG) on the target
forwarder instance (SQLVM1) to the distributed AG (DAG).

To join the target AG to the distributed AG, run this script on the target forwarder:

SQL

ALTER AVAILABILITY GROUP [DAG]

JOIN

AVAILABILITY GROUP ON

'OnPremAG' WITH

LISTENER_URL = 'tcp://OnPremAG_LST.contoso.com:5022',

AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,

FAILOVER_MODE = MANUAL,

SEEDING_MODE = AUTOMATIC

),

'AzureAG' WITH

LISTENER_URL = 'tcp://AzureAG_LST.contoso.com:5022',

AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,

FAILOVER_MODE = MANUAL,

SEEDING_MODE = AUTOMATIC

);

GO

If you need to cancel, pause, or delay synchronization between the source and target
availability groups (such as, for example, performance issues), run this script on the
source global primary instance (OnPremNode1):

SQL

ALTER AVAILABILITY GROUP [DAG]

MODIFY

AVAILABILITY GROUP ON

'AzureAG' WITH

( SEEDING_MODE = MANUAL );

To learn more, review cancel automatic seeding to forwarder.

Next steps
After your distributed availability group is created, you are ready to complete the
migration.
Complete migration using a distributed
AG
Article • 09/29/2022

Use a distributed availability group (AG) to migrate your databases from SQL Server to
SQL Server on Azure Virtual Machines (VMs).

This article assumes you've already configured your distributed AG for either your
standalone databases or your availability group databases and now you're ready to
finalize the migration to SQL Server on Azure VMs.

Monitor migration
Use Transact-SQL (T-SQL) to monitor the progress of your migration.

Run the following script on the global primary and the forwarder and validate that the
state for synchronization_state_desc for the primary availability group (OnPremAG)
and the secondary availability group (AzureAG) is SYNCHRONIZED . Confirm that the
synchronization_state_desc for the distributed AG (DAG) is synchronizing and the
last_hardened_lsn is the same per database on both the global primary and the

forwarder.

If not, rerun the query on both sides every 5 seconds or so until it is the case.

Use the following script to monitor the migration:

SQL

SELECT ag.name

, drs.database_id

, db_name(drs.database_id) as database_name

, drs.group_id

, drs.replica_id

, drs.synchronization_state_desc

, drs.last_hardened_lsn
FROM sys.dm_hadr_database_replica_states drs

INNER JOIN sys.availability_groups ag on drs.group_id = ag.group_id;

Complete migration
Once you've validated the states of the availability group and the distributed AG, you're
ready to complete the migration. This consists of failing over the distributed AG to the
forwarder (the target SQL Server in Azure), and then cutting over the application to the
new primary on the Azure side.

To failover your distributed availability group, review failover to secondary availability


group.

After the failover, update the connection string of your application to connect to the
new primary replica in Azure. At this point, you can choose to maintain the distributed
availability group, or use DROP AVAILABILITY GROUP [DAG] on both the source and target
SQL Server instances to drop it.

If your domain controller is on the source side, validate that your target SQL Server VMs
in Azure have joined the domain before abandoning the source SQL Server instances.
Don't delete the domain controller on the source side until you create a domain on the
source side in Azure and add your SQL Server VMs to this new domain.

Next steps
For a tutorial showing you how to migrate a database to SQL Server on Azure Virtual
Machines using the T-SQL RESTORE command, see Migration guide: SQL Server to SQL
Server on Azure Virtual Machines.

For information about SQL Server on Azure Virtual Machines, see the Overview.

For information about connecting apps to SQL Server on Azure Virtual Machines,
see Connect applications.
Azure SQL glossary of terms
Article • 02/13/2023

Applies to:
Azure SQL Database
Azure SQL Managed Instance
SQL Server
on Azure VM

Azure SQL Database


Context Term Definition

Azure service Azure SQL Azure SQL Database is a fully managed platform as a service
Database (PaaS) database that handles most database management
functions such as upgrading, patching, backups, and
monitoring without user involvement.

Database The database engine used in Azure SQL Database is the most
engine recent stable version of the same database engine shipped as
the Microsoft SQL Server product. Some database engine
features are exclusive to Azure SQL Database or are available
before they are shipped with SQL Server. The database engine
is configured and optimized for use in the cloud. In addition to
core database functionality, Azure SQL Database provides
cloud-native capabilities such as Hyperscale and serverless
compute.

Server entity Logical server A logical server is a construct that acts as a central
administrative point for a collection of databases in Azure SQL
Database and Azure Synapse Analytics. All databases managed
by a server are created in the same region as the server. A
server is a purely logical concept: a logical server is not a
machine running an instance of the database engine. There is
no instance-level access or instance features for a server.

Deployment Databases may be deployed individually or as part of an elastic


option pool. You may move existing databases in and out of elastic
pools.

Elastic pool Elastic pools are a simple, cost-effective solution for managing
and scaling multiple databases that have varying and
unpredictable usage demands. The databases in an elastic pool
are on a single logical server. The databases share a set
allocation of resources at a set price.
Context Term Definition

Single database If you deploy single databases, each database is isolated, using
a dedicated database engine. Each has its own service tier
within your selected purchasing model and a compute size
defining the resources allocated to the database engine.

Purchasing Azure SQL Database has two purchasing models. The


model purchasing model defines how you scale your database and
how you are billed for compute, storage, etc.

DTU-based The Database Transaction Unit (DTU)-based purchasing model


purchasing is based on a bundled measure of compute, storage, and I/O
model resources. Compute sizes are expressed in DTUs for single
databases and in elastic database transaction units (eDTUs) for
elastic pools.

vCore-based A virtual core (vCore) represents a logical CPU. The vCore-


purchasing based purchasing model offers greater control over the
model hardware configuration to better match compute and memory
(recommended) requirements of the workload, pricing discounts for Azure
Hybrid Benefit (AHB) and Reserved Instance (RI), more granular
scaling, and greater transparency in hardware details. Newer
capabilities (for example, Hyperscale, serverless) are only
available in the vCore model.

Service tier The service tier defines the storage architecture, storage and
I/O limits, and business continuity options. Options for service
tiers vary by purchasing model.

DTU-based Basic, standard, and premium service tiers are available in the
service tiers DTU-based purchasing model.

vCore-based General purpose, Business Critical, and Hyperscale service tiers


service tiers are available in the vCore-based purchasing model
(recommended) (recommended).

Compute tier The compute tier determines whether resources are


continuously available (provisioned) or autoscaled (serverless).
Compute tier availability varies by purchasing model and
service tier. Only the vCore purchasing model's General
Purpose service tier makes serverless compute available.

Provisioned The provisioned compute tier provides a specific amount of


compute compute resources that are continuously provisioned
independent of workload activity. Under the provisioned
compute tier, you are billed at a fixed price per hour.
Context Term Definition

Serverless The serverless compute tier autoscales compute resources


compute based on workload activity and bills for the amount of
compute used per second. Azure SQL Database serverless is
currently available in the vCore purchasing model's General
Purpose service tier with standard-series (Gen5) hardware or
newer.

Hardware Available The vCore-based purchasing model allows you to select the
configuration hardware appropriate hardware configuration for your workload.
configurations Hardware configuration options include standard series (Gen5),
M-series, Fsv2-series, and DC-series.

Compute Compute size (service objective) is the amount of CPU,


size (service memory, and storage resources available for a single database
objective) or elastic pool. Compute size also defines resource
consumption limits, such as maximum IOPS, maximum log rate,
etc.

vCore-based Configure the compute size for your database or elastic pool
sizing options by selecting the appropriate service tier, compute tier, and
hardware for your workload. When using an elastic pool,
configure the reserved vCores for the pool, and optionally
configure per-database settings. For sizing options and
resource limits in the vCore-based purchasing model, see
vCore single databases, and vCore elastic pools.

DTU-based Configure the compute size for your database or elastic pool
sizing options by selecting the appropriate service tier and selecting the
maximum data size and number of DTUs. When using an elastic
pool, configure the reserved eDTUs for the pool, and optionally
configure per-database settings. For sizing options and
resource limits in the DTU-based purchasing model, see DTU
single databases and DTU elastic pools.

Azure SQL Managed Instance


Context Term More information

Azure service Azure SQL Azure SQL Managed Instance is a fully managed platform as a
Managed service (PaaS) deployment option of Azure SQL. It gives you an
Instance instance of SQL Server, including the SQL Server Agent, but
removes much of the overhead of managing a virtual machine.
Most of the features available in SQL Server are available in SQL
Managed Instance. Compare the features in Azure SQL Database
and Azure SQL Managed Instance.
Context Term More information

Database The database engine used in Azure SQL Managed Instance has
engine near 100% compatibility with the latest SQL Server (Enterprise
Edition) database engine. Some database engine features are
exclusive to managed instances or are available in managed
instances before they are shipped with SQL Server. Managed
instances provide cloud-native capabilities and integrations such
as a native virtual network (VNet) implementation, automatic
patching and version updates, automated backups, and high
availability.

Server entity Managed Each managed instance is itself an instance of SQL Server.
instance Databases created on a managed instance are colocated with
one another, and you may run cross-database queries. You can
connect to the managed instance and use instance-level features
such as linked servers and the SQL Server Agent.

Deployment Managed instances may be deployed individually or as part of an


option instance pools (preview). Managed instances cannot currently be
moved into, between, or out of instance pools.

Single A single managed instance is deployed to a dedicated set of


instance isolated virtual machines that run inside the customer's virtual
network subnet. These machines form a virtual cluster. Multiple
managed instances can be deployed into a single virtual cluster
if desired.

Instance pool Instance pools enable you to deploy multiple managed instances
(preview) to the same virtual machine. Instance pools enable you to
migrate smaller and less compute-intensive workloads to the
cloud without consolidating them in a single larger managed
instance.

Purchasing vCore-based SQL Managed Instance is available under the vCore-based


model purchasing purchasing model. Azure Hybrid Benefit is available for managed
model instances.

Service tier vCore-based SQL Managed Instance offers two service tiers. Both service tiers
service tiers guarantee 99.99% availability and enable you to independently
select storage size and compute capacity. Select either the
General Purpose or Business Critical service tier for a managed
instance based upon your performance and latency
requirements.

Compute Provisioned SQL Managed Instance provides a specific amount of compute


compute resources that are continuously provisioned independent of
workload activity, and bills for the amount of compute
provisioned at a fixed price per hour.
Context Term More information

Hardware Available SQL Managed Instance hardware configurations include


configuration hardware standard-series (Gen5), premium-series, and memory optimized
configurations premium-series hardware.

Compute vCore-based Compute size (service objective) is the maximum amount of CPU,
size sizing options memory, and storage resources available for a single managed
instance or instance pool. Configure the compute size for your
managed instance by selecting the appropriate service tier and
hardware for your workload. Learn about resource limits for
managed instances.

SQL Server on Azure VMs


Context Term More information

Azure service SQL Server on SQL Server on Azure VMs enables you to use full versions of SQL
Azure Virtual Server in the cloud without having to manage any on-premises
Machines hardware. SQL Server VMs simplify licensing costs when you pay
(VMs) as you go. You have both SQL Server and OS access with some
automated manageability features for SQL Server VMs, such as
the SQL IaaS Agent extension.

Server entity Virtual Azure VMs run in many geographic regions around the world.
machine or They also offer various machine sizes. The virtual machine image
VM gallery allows you to create a SQL Server VM with the right
version, edition, and operating system.

Image Windows VMs You can choose to deploy SQL Server VMs with Windows-based
or Linux VMs images or Linux-based images. Image selection specifies both
the OS version and SQL Server edition for your SQL Server VM.

Pricing Pricing for SQL Server on Azure VMs is based on SQL Server
licensing, operating system (OS), and virtual machine cost. You
can reduce costs by optimizing your VM size and shutting down
your VM when possible.

SQL Server Choose the appropriate free or paid SQL Server edition for your
licensing cost usage and requirements. For paid editions, you may pay per
usage (also known as pay as you go) or use Azure Hybrid
Benefit.

OS and virtual OS and virtual machine cost is based upon factors including your
machine cost choice of image, VM size, and storage configuration.
Context Term More information

VM You need to configure settings including security, storage, and


configuration high availability/disaster recovery for your SQL Server VM. The
easiest way to configure a SQL Server VM is to use one of our
Marketplace images, but you can also use this quick checklist for
a series of best practices and guidelines to navigate these
choices.

VM size VM size determines processing power, memory, and storage


capacity. You can collect a performance baseline and/or use the
SKU recommendation tool to help select the best VM size for
your workload.

Storage Your storage configuration options are determined by your


configuration selection of VM size and selection of storage settings including
disk type, caching settings, and disk striping. Learn how to
choose a VM size with enough storage scalability for your
workload and a mixture of disks (usually in a storage pool) that
meet the capacity and performance requirements of your
business.

Security You can enable Microsoft Defender for SQL, integrate Azure Key
considerations Vault, control access, and secure connections to your SQL Server
VM. Learn security guidelines to establish secure access to SQL
Server VMs.

SQL IaaS The SQL IaaS Agent extension (SqlIaasExtension) runs on SQL
Agent Server VMs to automate management and administration tasks.
extension There's no extra cost associated with the extension.

Automated Automated Patching establishes a maintenance window for a


patching SQL Server VM when security updates will be automatically
applied by the SQL IaaS Agent extension. Note that there may
be other mechanisms for applying Automatic Updates. If you
configure automated patching using the SQL IaaS Agent
extension you should ensure that there are no other conflicting
update schedules.

Automated Automated Backup v2 automatically configures Managed


backup Backup to Microsoft Azure for all existing and new databases on
a SQL Server VM running SQL Server 2016 or later Standard,
Enterprise, or Developer editions.
Transact-SQL reference (Database
Engine)
Article • 07/12/2023

Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Azure Synapse Analytics Analytics Platform System (PDW) SQL Endpoint in
Microsoft Fabric Warehouse in Microsoft Fabric

This article gives the basics about how to find and use the Microsoft Transact-SQL (T-
SQL) reference articles. T-SQL is central to using Microsoft SQL products and services. All
tools and applications that communicate with a SQL Server database do so by sending
T-SQL commands.

T-SQL compliance with the SQL standard


For detailed technical documents about how certain standards are implemented in SQL
Server, see the Microsoft SQL Server Standards Support documentation.

Tools that use T-SQL


Some of the Microsoft tools that issue T-SQL commands are:

SQL Server Management Studio (SSMS)


Azure Data Studio
SQL Server Data Tools (SSDT)
sqlcmd

Locate the Transact-SQL reference articles


To find T-SQL articles, use search at the top right of this page, or use the table of
contents on the left side of the page. You can also type a T-SQL key word in the
Management Studio Query Editor window, and press F1.

Find system views


To find the system tables, views, functions, and procedures, see these links, which are in
the Using relational databases section of the SQL documentation.

System catalog Views


System compatibility views
System dynamic management views
System functions
System information schema views
System stored procedures
System tables

"Applies to" references


The T-SQL reference articles encompass multiple versions of SQL Server, starting with
2008, and the other Azure SQL services. Near the top of each article, is a section that
indicates which products and services support subject of the article.

For example, this article applies to all versions, and has the following label.

Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Azure Synapse Analytics Analytics Platform System (PDW)

Another example, the following label indicates an article that applies only to Azure
Synapse Analytics and Parallel Data Warehouse.

Applies to: Azure Synapse Analytics Analytics Platform System (PDW)

In some cases, the article is used by a product or service, but all of the arguments aren't
supported. In this case, other Applies to sections are inserted into the appropriate
argument descriptions in the body of the article.

Get help from Microsoft Q & A


For online help, see the Microsoft Q & A Transact-SQL Forum.

See other language references


The SQL docs include these other language references:

XQuery Language Reference


Integration Services Language Reference
Replication Language Reference
Analysis Services Language Reference

Next steps
Tutorial: Writing Transact-SQL Statements
Transact-SQL Syntax Conventions (Transact-SQL)
SQL
Reference

Commands
az sql Manage Azure SQL Databases and Data Warehouses.
Az.Sql
Reference

This topic displays help topics for the Azure SQL Database Cmdlets.

SQL
Add-AzSqlDatabaseToFailoverGroup Adds one or more databases to an Azure SQL
Database Failover Group.

Add-AzSqlElasticJobStep Adds a job step to a job

Add-AzSqlElasticJobTarget Adds a target to a target group

Add-AzSqlInstanceKeyVaultKey Adds a key vault key to the provided Managed


Instance.

Add- Adds a Transparent Data Encryption Certificate for


AzSqlManagedInstanceTransparentDataEncryptionCertificate the given managed instance

Add-AzSqlServerKeyVaultKey Adds a Key Vault key to a SQL server.

Add-AzSqlServerTransparentDataEncryptionCertificate Adds a Transparent Data Encryption Certificate for


the given SQL Server instance

Clear-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline Clears the vulnerability assessment rule baseline.

Clear-AzSqlDatabaseVulnerabilityAssessmentSetting Clears the vulnerability assessment settings of a


database.

Clear- Clears the vulnerability assessment rule baseline.


AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline

Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting Clears the vulnerability assessment settings of a


managed database.

Clear-AzSqlInstanceVulnerabilityAssessmentSetting Clears the vulnerability assessment settings of a


managed instance.

Clear-AzSqlServerVulnerabilityAssessmentSetting Clears the vulnerability assessment settings of a


server.

Complete-AzSqlInstanceDatabaseCopy Complete online copy operation of a managed


database.

Complete-AzSqlInstanceDatabaseLogReplay Completes Log Replay service for the given


database.

Complete-AzSqlInstanceDatabaseMove Complete online move operation of a managed


database.

Convert-AzSqlDatabaseVulnerabilityAssessmentScan Converts a vulnerability assessment scan results to


Excel format.
Convert-AzSqlInstanceDatabaseVulnerabilityAssessmentScan Converts a vulnerability assessment scan results to
Excel format.

Copy-AzSqlDatabaseLongTermRetentionBackup Copies a long term retention backup to a target


database.

Copy-AzSqlInstanceDatabase Copy managed database to another managed


instance.

Disable-AzSqlDatabaseLedgerDigestUpload Disables uploading ledger digests to Azure Blob


storage or to Azure Confidential Ledger.

Disable-AzSqlDatabaseSensitivityRecommendation Disables (dismisses) sensitivity recommendations


on columns in the database.

Disable-AzSqlInstanceActiveDirectoryOnlyAuthentication Disables Azure AD only authentication for a specific


SQL Managed Instance.

Disable-AzSqlInstanceAdvancedDataSecurity Disables Advanced Data Security on a managed


instance.

Disable-AzSqlInstanceDatabaseLedgerDigestUpload Disables uploading ledger digests to Azure Blob


storage or Azure Confidential Ledger in Azure SQL
Managed Instance.

Disable-AzSqlInstanceDatabaseSensitivityRecommendation Disables (dismisses) sensitivity recommendations


on columns in the Azure SQL Managed Instance
database.

Disable-AzSqlServerActiveDirectoryOnlyAuthentication Disables Azure AD only authentication for a specific


SQL Server.

Disable-AzSqlServerAdvancedDataSecurity Disables Advanced Data Security on a server.

Enable-AzSqlDatabaseLedgerDigestUpload Enables uploading ledger digests to an Azure


Storage account or to Azure Confidential Ledger.

Enable-AzSqlDatabaseSensitivityRecommendation Enables sensitivity recommendations on columns


(recommendations are enabled by default on all
columns) in the database.

Enable-AzSqlInstanceActiveDirectoryOnlyAuthentication Enables Azure AD only authentication for a specific


SQL Managed Instance.

Enable-AzSqlInstanceAdvancedDataSecurity Enables Advanced Data Security on a managed


instance.

Enable-AzSqlInstanceDatabaseLedgerDigestUpload Enables uploading ledger digests to an Azure


Storage account or Azure Confidential Ledger for a
database in an Azure SQL Managed Instance.

Enable-AzSqlInstanceDatabaseSensitivityRecommendation Enables sensitivity recommendations on columns


(recommendations are enabled by default on all
columns) in the Azure SQL Managed Instance
database.
Enable-AzSqlServerActiveDirectoryOnlyAuthentication Enables Azure AD only authentication for a specific
SQL Server.

Enable-AzSqlServerAdvancedDataSecurity Enables Advanced Data Security on a server.

Get-AzSqlCapability Gets SQL Database capabilities for the current


subscription.

Get-AzSqlDatabase Gets one or more databases.

Get-AzSqlDatabaseActivity Gets the status of database operations.

Get-AzSqlDatabaseAdvancedThreatProtectionSetting Gets the Advanced Threat Protection settings for a


database.

Get-AzSqlDatabaseAdvisor Gets one or more Advisors for an Azure SQL


Database.

Get-AzSqlDatabaseAudit Gets the auditing settings of an Azure SQL


database.

Get-AzSqlDatabaseBackupLongTermRetentionPolicy Gets a database long term retention policy.

Get-AzSqlDatabaseBackupShortTermRetentionPolicy Gets a backup short term retention policy.

Get-AzSqlDatabaseDataMaskingPolicy Gets the data masking policy for a database.

Get-AzSqlDatabaseDataMaskingRule Gets the data masking rules from a database.

Get-AzSqlDatabaseExpanded Gets a database and its expanded property values.

Get-AzSqlDatabaseFailoverGroup Gets or lists Azure SQL Database Failover Groups.

Get-AzSqlDatabaseGeoBackup Gets a geo-redundant backup of a database.

Get-AzSqlDatabaseGeoBackupPolicy Gets a database geo backup policy.

Get-AzSqlDatabaseImportExportStatus Gets the details of an import or export of an Azure


SQL Database.

Get-AzSqlDatabaseIndexRecommendation Gets the recommended index operations for a


server or database.

Get-AzSqlDatabaseInstanceFailoverGroup Gets or lists Instance Failover Groups.

Get-AzSqlDatabaseLedgerDigestUpload Gets the ledger digest upload settings of an Azure


SQL database.

Get-AzSqlDatabaseLongTermRetentionBackup Gets one or more long term retention backups.

Get-AzSqlDatabaseRecommendedAction Gets one or more recommended actions for an


Azure SQL Database Advisor.

Get-AzSqlDatabaseReplicationLink Gets the geo-replication links between an Azure


SQL Database and a resource group or SQL Server.

Get-AzSqlDatabaseRestorePoint Retrieves the distinct restore points from which a


SQL Data Warehouse can be restored.
Get-AzSqlDatabaseSensitivityClassification Gets the current information types and sensitivity
labels of columns in the database.

Get-AzSqlDatabaseSensitivityRecommendation Gets the recommended information types and


sensitivity labels of columns in the database.

Get-AzSqlDatabaseTransparentDataEncryption Gets the TDE state for a database.

Get-AzSqlDatabaseUpgradeHint Gets pricing tier hints for a database.

Get-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline Gets the vulnerability assessment rule baseline.

Get-AzSqlDatabaseVulnerabilityAssessmentScanRecord Gets all vulnerability assessment scan record(s)


associated with a given database.

Get-AzSqlDatabaseVulnerabilityAssessmentSetting Gets the vulnerability assessment settings of a


database.

Get-AzSqlDeletedDatabaseBackup Gets a deleted database that you can restore.

Get-AzSqlDeletedInstanceDatabaseBackup Gets a deleted database that you can restore.

Get-AzSqlElasticJob Gets one or more jobs

Get-AzSqlElasticJobAgent Gets a Azure SQL Elastic Job agent

Get-AzSqlElasticJobCredential Gets one or more credentials

Get-AzSqlElasticJobExecution Gets one or more job executions

Get-AzSqlElasticJobStep Gets one or more job steps

Get-AzSqlElasticJobStepExecution Gets one or more job step executions

Get-AzSqlElasticJobTargetExecution Gets one or more job target executions

Get-AzSqlElasticJobTargetGroup Gets one or more job target groups

Get-AzSqlElasticPool Gets elastic pools and their property values in an


Azure SQL Database.

Get-AzSqlElasticPoolActivity Gets the status of operations on an elastic pool.

Get-AzSqlElasticPoolAdvisor Gets one or more Advisors for an Azure SQL Elastic


Pool.

Get-AzSqlElasticPoolDatabase Gets elastic databases in an elastic pool and their


property values.

Get-AzSqlElasticPoolRecommendation Gets elastic pool recommendations.

Get-AzSqlElasticPoolRecommendedAction Gets one or more recommended actions for an


Azure SQL Elastic Pool Advisor.

Get-AzSqlInstance Returns information about Azure SQL Managed


Database Instance.

Get-AzSqlInstanceActiveDirectoryAdministrator Gets information about an Azure AD administrator


for SQL Managed Instance.

Get-AzSqlInstanceActiveDirectoryOnlyAuthentication Gets Azure AD only authentication for a specific


SQL Managed Instance.

Get-AzSqlInstanceAdvancedDataSecurityPolicy Gets Advanced Data Security policy of a managed


instance.

Get-AzSqlInstanceAdvancedThreatProtectionSetting Gets the Advanced Threat Protection settings for a


managed instance.

Get-AzSqlInstanceDatabase Returns information about Azure SQL Managed


Instance database.

Get-AzSqlInstanceDatabaseAdvancedThreatProtectionSetting Gets the Advanced Threat Protection settings for a


managed database.

Get-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy Gets a managed database's long term retention


policy

Get-AzSqlInstanceDatabaseBackupShortTermRetentionPolicy Gets a backup short term retention policy.

Get-AzSqlInstanceDatabaseCopyOperation Get managed database copy operation details

Get-AzSqlInstanceDatabaseGeoBackup Returns information about Azure SQL Managed


Instance database redundant backup.

Get-AzSqlInstanceDatabaseLedgerDigestUpload Gets the ledger digest upload settings of a


database in Azure SQL Managed Instance.

Get-AzSqlInstanceDatabaseLogReplay Gets the Log Replay service status.

Get-AzSqlInstanceDatabaseLongTermRetentionBackup Gets long term retention backup(s).

Get-AzSqlInstanceDatabaseMoveOperation Get managed database move operation details

Get-AzSqlInstanceDatabaseSensitivityClassification Gets the current information types and sensitivity


labels of columns in the Azure SQL Managed
Instance database.

Get-AzSqlInstanceDatabaseSensitivityRecommendation Gets the recommended information types and


sensitivity labels of columns in the Azure SQL
Managed Instance database.

Get- Gets the vulnerability assessment rule baseline.


AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline

Get- Gets all vulnerability assessment scan record(s)


AzSqlInstanceDatabaseVulnerabilityAssessmentScanRecord associated with a given managed database.

Get-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting Gets the vulnerability assessment settings of a


managed database.

Get-AzSqlInstanceDtc Gets an Azure SQL Managed Instance DTC.

Get-AzSqlInstanceEndpointCertificate Returns information about endpoint certificates.

Get-AzSqlInstanceKeyVaultKey Gets a SQL managed instance's Key Vault keys.


Get-AzSqlInstanceLink Returns information about link feature for Azure
SQL Managed Instance.

Get-AzSqlInstanceOperation Gets a SQL managed instance's operations.

Get-AzSqlInstancePool Returns information about the Azure SQL Instance


pool.

Get-AzSqlInstancePoolUsage Returns information about an Azure SQL Instance


pool's usage.

Get-AzSqlInstanceServerTrustCertificate Returns information about server trust certificate.

Get-AzSqlInstanceTransparentDataEncryptionProtector Gets the Transparent Data Encryption (TDE)


protector for a SQL managed instance.

Get-AzSqlInstanceVulnerabilityAssessmentSetting Gets the vulnerability assessment settings of a


managed instance.

Get-AzSqlServer Returns information about SQL Database servers.

Get-AzSqlServerActiveDirectoryAdministrator Gets information about an Azure AD administrator


for SQL Server.

Get-AzSqlServerActiveDirectoryOnlyAuthentication Gets Azure AD only authentication for a specific


SQL Server.

Get-AzSqlServerAdvancedDataSecurityPolicy Gets Advanced Data Security policy of a server.

Get-AzSqlServerAdvancedThreatProtectionSetting Gets the Advanced Threat Protection settings for a


server.

Get-AzSqlServerAdvisor Gets one or more Advisors for an Azure SQL Server.

Get-AzSqlServerAudit Gets the auditing settings of an Azure SQL server.

Get-AzSqlServerCommunicationLink Gets communication links for elastic database


transactions between database servers.

Get-AzSqlServerConfigurationOption Returns information about server configuration


options for Azure SQL Managed Instance.

Get-AzSqlServerDisasterRecoveryConfiguration Gets a database server system recovery


configuration.

Get-AzSqlServerDisasterRecoveryConfigurationActivity Gets activity for a database server system recovery


configuration.

Get-AzSqlServerDnsAlias Gets or lists Azure SQL Server DNS Alias.

Get-AzSqlServerFirewallRule Gets firewall rules for a SQL Database server.

Get-AzSqlServerIpv6FirewallRule Gets IPv6 firewall rules for a SQL Database server.

Get-AzSqlServerKeyVaultKey Gets a SQL server's Key Vault keys.

Get-AzSqlServerMSSupportAudit Gets the Microsoft support operations auditing


settings of an Azure SQL server.
Get-AzSqlServerOutboundFirewallRule Gets outbound firewall rules (Allowed FQDNs) for a
SQL Database server.

Get-AzSqlServerRecommendedAction Gets one or more recommended actions for an


Azure SQL Server Advisor.

Get-AzSqlServerServiceObjective Gets service objectives for an Azure SQL Database


server.

Get-AzSqlServerTransparentDataEncryptionProtector Gets the Transparent Data Encryption (TDE)


protector

Get-AzSqlServerTrustGroup Gets information about Server Trust Group.

Get-AzSqlServerUpgradeHint Gets pricing tier hints for upgrading an Azure SQL


Database server.

Get-AzSqlServerVirtualNetworkRule Gets or lists Azure SQL Server Virtual Network Rule.

Get-AzSqlServerVulnerabilityAssessmentSetting Gets the vulnerability assessment settings of a


server.

Get-AzSqlSyncAgent Returns information about Azure SQL Sync Agents.

Get-AzSqlSyncAgentLinkedDatabase Returns information about SQL Server databases


linked by a sync agent.

Get-AzSqlSyncGroup Returns information about Azure SQL Database


Sync Groups.

Get-AzSqlSyncGroupLog Returns the logs of an Azure SQL Database Sync


Group.

Get-AzSqlSyncMember Returns information about Azure SQL Database


Sync Members.

Get-AzSqlSyncSchema Returns information about the sync schema of a


member database or a hub database.

Get-AzSqlVirtualCluster Returns information about Azure SQL Virtual


Cluster.

Invoke-AzSqlDatabaseFailover Failovers a database.

Invoke- Revalidates Database Encryption Protector AKV key


AzSqlDatabaseTransparentDataEncryptionProtectorRevalidation

Invoke- Reverts Database Encryption Protector AKV key to


AzSqlDatabaseTransparentDataEncryptionProtectorRevert Server level key

Invoke-AzSqlElasticPoolFailover Failovers an elastic pool.

Invoke-AzSqlInstanceFailover Failovers an Azure SQL Managed Instance.

Invoke- Revalidates the Managed Instance Encryption


AzSqlInstanceTransparentDataEncryptionProtectorRevalidation Protector AKV key
Invoke-AzSqlServerExternalGovernanceStatusRefresh Refreshes the value of external governance on the
server.

Invoke- Revalidates the Server Encryption Protector AKV


AzSqlServerTransparentDataEncryptionProtectorRevalidation key

Move-AzSqlInstanceDatabase Move managed database to another managed


instance.

New-AzSqlDatabase Creates a database or an elastic database.

New-AzSqlDatabaseCopy Creates a copy of a SQL Database that uses the


snapshot at the current time.

New-AzSqlDatabaseDataMaskingRule Creates a data masking rule for a database.

New-AzSqlDatabaseExport Exports an Azure SQL Database as a .bacpac file to


a storage account.

New-AzSqlDatabaseFailoverGroup This command creates a new Azure SQL Database


Failover Group.

New-AzSqlDatabaseImport Imports a .bacpac file and create a new database


on the server.

New-AzSqlDatabaseInstanceFailoverGroup This command creates a new Azure SQL Database


Instance Failover Group.

New-AzSqlDatabaseRestorePoint Creates a new restore point from which a SQL


Database can be restored.

New-AzSqlDatabaseSecondary Creates a secondary database for an existing


database and starts data replication.

New-AzSqlElasticJob Creates a new job

New-AzSqlElasticJobAgent Creates a new elastic job agent

New-AzSqlElasticJobCredential Creates a new job credential

New-AzSqlElasticJobTargetGroup Creates a new target group

New-AzSqlElasticPool Creates an elastic database pool for a SQL


Database.

New-AzSqlInstance Creates an Azure SQL Managed Instance.

New-AzSqlInstanceDatabase Creates an Azure SQL Managed Instance database.

New-AzSqlInstanceLink Creates a new instance link.

New-AzSqlInstancePool Creates an Azure SQL Instance pool.

New-AzSqlInstanceServerTrustCertificate Creates a new server trust certificate.

New-AzSqlServer Creates a SQL Database server.


New-AzSqlServerCommunicationLink Creates a communication link for elastic database
transactions between two SQL Database servers.

New-AzSqlServerDisasterRecoveryConfiguration Creates a database server system recovery


configuration.

New-AzSqlServerDnsAlias This command creates a new Azure SQL Server


DNS Alias.

New-AzSqlServerFirewallRule Creates a firewall rule for a SQL Database server.

New-AzSqlServerIpv6FirewallRule Creates an IPv6 firewall rule for a SQL Database


server.

New-AzSqlServerOutboundFirewallRule Adds the allowed FQDN to the list of outbound


firewall rules and creates a new outbound firewall
rule for Azure SQL Database server.

New-AzSqlServerTrustGroup Creates or updates a Server Trust Group.

New-AzSqlServerVirtualNetworkRule Creates an Azure SQL Server Virtual Network Rule.

New-AzSqlSyncAgent Creates an Azure SQL Sync Agent.

New-AzSqlSyncAgentKey Creates an Azure SQL Sync Agent Key.

New-AzSqlSyncGroup Creates an Azure SQL Database Sync Group.

New-AzSqlSyncMember Creates an Azure SQL Database Sync Member.

Remove-AzSqlDatabase Removes an Azure SQL database.

Remove-AzSqlDatabaseAudit Removes the auditing settings of an Azure SQL


database.

Remove-AzSqlDatabaseDataMaskingRule Removes a data masking rule from a database.

Remove-AzSqlDatabaseFailoverGroup Removes an Azure SQL Database Failover Group.

Remove-AzSqlDatabaseFromFailoverGroup Removes one or more databases from an Azure


SQL Database Failover Group.

Remove-AzSqlDatabaseInstanceFailoverGroup Removes an Instance Failover Group.

Remove-AzSqlDatabaseLongTermRetentionBackup Deletes a long term retention backup.

Remove-AzSqlDatabaseRestorePoint Removes given restore point from a SQL Database.

Remove-AzSqlDatabaseSecondary Terminates data replication between a SQL


Database and the specified secondary database.

Remove-AzSqlDatabaseSensitivityClassification Removes the information types and sensitivity


labels of columns in the database.

Remove-AzSqlElasticJob Removes a job

Remove-AzSqlElasticJobAgent Removes the elastic job agent


Remove-AzSqlElasticJobCredential Removes the elastic job credential

Remove-AzSqlElasticJobStep Removes the job step

Remove-AzSqlElasticJobTarget Removes the target from the target group

Remove-AzSqlElasticJobTargetGroup Removes the target group

Remove-AzSqlElasticPool Deletes an elastic database pool.

Remove-AzSqlInstance Removes an Azure SQL Managed Database


Instance.

Remove-AzSqlInstanceActiveDirectoryAdministrator Removes an Azure AD administrator for SQL


Managed Instance.

Remove-AzSqlInstanceDatabase Removes an Azure SQL Managed Instance


database.

Remove-AzSqlInstanceDatabaseLongTermRetentionBackup Deletes a long term retention backup.

Remove-AzSqlInstanceDatabaseSensitivityClassification Removes the information types and sensitivity


labels of columns in the Azure SQL Managed
Instance database.

Remove-AzSqlInstanceKeyVaultKey Removes a Key Vault key from a SQL managed


instance

Remove-AzSqlInstanceLink Removes an instance link.

Remove-AzSqlInstancePool Removes an Azure SQL Instance pool.

Remove-AzSqlInstanceServerTrustCertificate Removes a server trust certificate.

Remove-AzSqlServer Removes an Azure SQL Database server.

Remove-AzSqlServerActiveDirectoryAdministrator Removes an Azure AD administrator for SQL Server.

Remove-AzSqlServerAudit Removes the auditing settings of an Azure SQL


server.

Remove-AzSqlServerCommunicationLink Deletes a communication link for elastic database


transactions between two servers.

Remove-AzSqlServerDisasterRecoveryConfiguration Removes a SQL database server system recovery


configuration.

Remove-AzSqlServerDnsAlias Removes Azure SQL Server DNS Alias.

Remove-AzSqlServerFirewallRule Deletes a firewall rule from a SQL Database server.

Remove-AzSqlServerIpv6FirewallRule Deletes an IPv6 firewall rule from a SQL Database


server.

Remove-AzSqlServerKeyVaultKey Removes a Key Vault key from a SQL server.

Remove-AzSqlServerMSSupportAudit Removes the Microsoft support operations


auditing settings of an Azure SQL server.
Remove-AzSqlServerOutboundFirewallRule Deletes an allowed FQDN from the list of outbound
firewall rules (Allowed FQDNs) from a SQL
Database server.

Remove-AzSqlServerTrustGroup Deletes a Server Trust Group.

Remove-AzSqlServerVirtualNetworkRule Deletes an Azure SQL Server Virtual Network Rule.

Remove-AzSqlSyncAgent Removes an Azure SQL Sync Agent.

Remove-AzSqlSyncGroup Removes an Azure SQL Database Sync Group.

Remove-AzSqlSyncMember Removes an Azure SQL Database Sync Member.

Remove-AzSqlVirtualCluster Removes an Azure SQL Virtual Cluster.

Restore-AzSqlDatabase Restores a SQL database.

Restore-AzSqlInstanceDatabase Restores an Azure SQL Managed Instance


database.

Resume-AzSqlDatabase Resumes a SQL Data Warehouse database.

Set-AzSqlDatabase Sets properties for a database, or moves an


existing database into an elastic pool.

Set-AzSqlDatabaseAdvisorAutoExecuteStatus Modifies auto execute status of an Azure SQL


Database Advisor.

Set-AzSqlDatabaseAudit Changes the auditing settings for an Azure SQL


Database.

Set-AzSqlDatabaseBackupLongTermRetentionPolicy Sets a server long term retention policy.

Set-AzSqlDatabaseBackupShortTermRetentionPolicy Sets a backup short term retention policy.

Set-AzSqlDatabaseDataMaskingPolicy Sets data masking for a database.

Set-AzSqlDatabaseDataMaskingRule Sets the properties of a data masking rule for a


database.

Set-AzSqlDatabaseFailoverGroup Modifies the configuration of an Azure SQL


Database Failover Group.

Set-AzSqlDatabaseGeoBackupPolicy Sets a database geo backup policy.

Set-AzSqlDatabaseInstanceFailoverGroup Modifies the configuration of an Instance Failover


Group.

Set-AzSqlDatabaseRecommendedActionState Updates the state of an Azure SQL Database


recommended action.

Set-AzSqlDatabaseSecondary Switches a secondary database to be primary in


order to initiate failover.

Set-AzSqlDatabaseSensitivityClassification Sets the information types and sensitivity labels of


columns in the database.
Set-AzSqlDatabaseTransparentDataEncryption Modifies TDE property for a database.

Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline Sets the vulnerability assessment rule baseline.

Set-AzSqlElasticJob Updates a job

Set-AzSqlElasticJobAgent Updates an elastic job agent

Set-AzSqlElasticJobCredential Updates a job credential

Set-AzSqlElasticJobStep Updates a job step

Set-AzSqlElasticPool Modifies properties of an elastic database pool in


Azure SQL Database.

Set-AzSqlElasticPoolAdvisorAutoExecuteStatus Updates auto execute status of an Azure SQL


Elastic Pool Advisor.

Set-AzSqlElasticPoolRecommendedActionState Updates the state of an Azure SQL Elastic Pool


recommended action.

Set-AzSqlInstance Sets properties for an Azure SQL Managed


Instance.

Set-AzSqlInstanceActiveDirectoryAdministrator Provisions an Azure AD administrator for SQL


Managed Instance.

Set-AzSqlInstanceDatabase Updated an Azure SQL Managed Instance


database.

Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy The Set-


AzSqlInstanceDatabaseLongTermRetentionBackup
cmdlet sets a managed database's long term
retention policy.

Set-AzSqlInstanceDatabaseBackupShortTermRetentionPolicy Sets a backup short term retention policy.

Set-AzSqlInstanceDatabaseSensitivityClassification Sets the information types and sensitivity labels of


columns in the Azure SQL Managed Instance
database.

Set- Sets the vulnerability assessment rule baseline.


AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline

Set-AzSqlInstanceDtc Sets properties for an Azure SQL Managed Instance


DTC

Set-AzSqlInstancePool Sets properties for an Azure SQL Instance pool.

Set-AzSqlInstanceTransparentDataEncryptionProtector Sets the Transparent Data Encryption (TDE)


protector for a SQL managed instance.

Set-AzSqlServer Modifies properties of a SQL Database server.

Set-AzSqlServerActiveDirectoryAdministrator Provisions an Azure AD administrator for SQL


Server.
Set-AzSqlServerAdvisorAutoExecuteStatus Updates the auto execute status of an Azure SQL
Server Advisor.

Set-AzSqlServerAudit Changes the auditing settings of an Azure SQL


server.

Set-AzSqlServerConfigurationOption Sets the value for a server configuration option on


Azure SQL Managed Instance.

Set-AzSqlServerDisasterRecoveryConfiguration Modifies a database server recovery configuration.

Set-AzSqlServerDnsAlias Modifies the server to which Azure SQL Server DNS


Alias is pointing

Set-AzSqlServerFirewallRule Modifies a firewall rule in Azure SQL Database


server.

Set-AzSqlServerIpv6FirewallRule Modifies an IPv6 firewall rule in Azure SQL


Database server.

Set-AzSqlServerMSSupportAudit Changes the Microsoft support operations auditing


settings of an Azure SQL server.

Set-AzSqlServerRecommendedActionState Updates the state of an Azure SQL Server


recommended action.

Set-AzSqlServerTransparentDataEncryptionProtector Sets the Transparent Data Encryption (TDE)


protector for a SQL server.

Set-AzSqlServerVirtualNetworkRule Modifies the configuration of an Azure SQL Server


Virtual Network Rule.

Start-AzSqlDatabaseExecuteIndexRecommendation Starts the workflow that runs a recommended


index operation.

Start-AzSqlDatabaseVulnerabilityAssessmentScan Starts a vulnerability assessment scan.

Start-AzSqlElasticJob Starts a job, returning a job execution id that can


be polled to view it's status

Start-AzSqlInstanceDatabaseLogReplay Starts a Log Replay service with the given


parameters.

Start-AzSqlInstanceDatabaseVulnerabilityAssessmentScan Starts a vulnerability assessment scan.

Start-AzSqlSyncGroupSync Starts a sync group synchronization.

Stop-AzSqlDatabaseActivity Cancels the asynchronous updates operation on


the database.

Stop-AzSqlDatabaseExecuteIndexRecommendation Stops the workflow that runs a recommended


index operation.

Stop-AzSqlElasticJob Stops a job given it's job execution id

Stop-AzSqlElasticPoolActivity Cancels the asynchronous update operation on an


elastic pool.
Stop-AzSqlInstanceDatabaseCopy Stop copy operation of a managed database.

Stop-AzSqlInstanceDatabaseLogReplay Cancels the Log Replay service by dropping the


database.

Stop-AzSqlInstanceDatabaseMove Stop move operation of a managed database.

Stop-AzSqlInstanceOperation Stops a SQL managed instance's operations.

Stop-AzSqlSyncGroupSync Stops a sync group synchronization.

Suspend-AzSqlDatabase Suspends a SQL Data Warehouse database.

Switch-AzSqlDatabaseFailoverGroup Executes a failover of an Azure SQL Database


Failover Group.

Switch-AzSqlDatabaseInstanceFailoverGroup Executes a failover of an Instance Failover Group.

Update-AzSqlDatabaseAdvancedThreatProtectionSetting Sets the Advanced Threat Protection settings on a


database.

Update-AzSqlDatabaseLongTermRetentionBackup Updates a long term retention backup.

Update-AzSqlDatabaseVulnerabilityAssessmentSetting Updates the vulnerability assessment settings of a


database.

Update-AzSqlInstanceAdvancedThreatProtectionSetting Sets the Advanced Threat Protection settings on a


managed instance.

Update- Sets the Advanced Threat Protection settings on a


AzSqlInstanceDatabaseAdvancedThreatProtectionSetting managed database.

Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting Updates the vulnerability assessment settings of a


managed database.

Update-AzSqlInstanceLink Updates the properties of an instance link.

Update-AzSqlInstanceVulnerabilityAssessmentSetting Updates the vulnerability assessment settings of a


managed instance.

Update-AzSqlServerAdvancedThreatProtectionSetting Sets the Advanced Threat Protection settings on a


server.

Update-AzSqlServerVulnerabilityAssessmentSetting Updates the vulnerability assessment settings of a


server.

Update-AzSqlSyncGroup Updates an Azure SQL Database Sync Group.

Update-AzSqlSyncMember Updates an Azure SQL Database Sync Member.

Update-AzSqlSyncSchema Update the sync schema for a sync member


database or a sync hub database. It will get the
latest database schema from the real database and
then use it refresh the schema cached by Sync
metadata database. If "SyncMemberName" is
specified, it will refresh the member database
schema; if not, it will refresh the hub database
schema.
Microsoft.Azure.Management.Sql.
Models Namespace
Reference

) Important

Some information relates to prerelease product that may be substantially modified


before it’s released. Microsoft makes no warranties, express or implied, with respect
to the information provided here.

Classes
AdministratorType Defines values for AdministratorType.

Advisor Database, Server or Elastic Pool Advisor.

AggregationFunctionType Defines values for AggregationFunctionType.

AutomaticTuningOptions Automatic tuning properties for individual advisors.

AutomaticTuningServer Automatic tuning properties for individual advisors.


Options

AutoPauseDelayTimeRange Supported auto pause delay time range

BackupShortTermRetention A short term retention policy.


Policy

BackupStorageRedundancy Defines values for BackupStorageRedundancy.

CapabilityGroup Defines values for CapabilityGroup.

CatalogCollationType Defines values for CatalogCollationType.

CheckNameAvailability A request to check whether the specified name for a resource is


Request available.

CheckNameAvailability The result of a name availability check.


Response

ColumnDataType Defines values for ColumnDataType.

CompleteDatabaseRestore Contains the information necessary to perform a complete


Definition database restore operation.
CopyLongTermRetention Contains the information necessary to perform long term
BackupParameters retention backup copy operation.

CreateDatabaseRestorePoint Contains the information necessary to perform a create database


Definition restore point operation.

CreatedByType Defines values for CreatedByType.

CreateMode Defines values for CreateMode.

Database A database resource.

DatabaseAdvancedThreat A database Advanced Threat Protection.


Protection

DatabaseAutomaticTuning Database-level Automatic Tuning.

DatabaseBlobAuditingPolicy A database blob auditing policy.

DatabaseColumn A database column resource.

DatabaseExtensions An export managed database operation result resource.

DatabaseIdentity Azure Active Directory identity configuration for a resource.

DatabaseIdentityType Defines values for DatabaseIdentityType.

DatabaseLicenseType Defines values for DatabaseLicenseType.

DatabaseOperation A database operation.

DatabaseReadScale Defines values for DatabaseReadScale.

DatabaseSchema A database schema resource.

DatabaseSecurityAlertPolicy A database security alert policy.

DatabaseState Defines values for DatabaseState.

DatabaseStatus Defines values for DatabaseStatus.

DatabaseTable A database table resource.

DatabaseUpdate A database update resource.

DatabaseUsage Usage metric of a database.

DatabaseUserIdentity Azure Active Directory identity configuration for a resource.

DatabaseVulnerability A database vulnerability assessment.


Assessment

DatabaseVulnerability A database vulnerability assessment rule baseline.


AssessmentRuleBaseline

DatabaseVulnerability Properties for an Azure SQL Database Vulnerability Assessment


AssessmentRuleBaselineItem rule baseline's result.

DatabaseVulnerability A database Vulnerability Assessment scan export resource.


AssessmentScansExport

DataMaskingPolicy Represents a database data masking policy.

DataMaskingRule Represents a database data masking rule.

DataWarehouseUserActivities User activities of a data warehouse

DayOfWeek Defines values for DayOfWeek.

DeletedServer A deleted server.

DistributedAvailabilityGroup Distributed availability group between box and Sql Managed


Instance.

EditionCapability The edition capability.

ElasticPool An elastic pool.

ElasticPoolActivity Represents the activity on an elastic pool.

ElasticPoolDatabaseActivity Represents the activity on an elastic pool.

ElasticPoolEditionCapability The elastic pool edition capability.

ElasticPoolLicenseType Defines values for ElasticPoolLicenseType.

ElasticPoolOperation A elastic pool operation.

ElasticPoolPerDatabaseMax The max per-database performance level capability.


PerformanceLevelCapability

ElasticPoolPerDatabaseMin The minimum per-database performance level capability.


PerformanceLevelCapability

ElasticPoolPerDatabase Per database settings of an elastic pool.


Settings

ElasticPoolPerformanceLevel The Elastic Pool performance level capability.


Capability

ElasticPoolState Defines values for ElasticPoolState.

ElasticPoolUpdate An elastic pool update.

EncryptionProtector The server encryption protector.


EndpointCertificate Certificate used on an endpoint on the Managed Instance.

ExportDatabaseDefinition Contains the information necessary to perform export database


operation.

ExtendedDatabaseBlob An extended database blob auditing policy.


AuditingPolicy

ExtendedServerBlobAuditing An extended server blob auditing policy.


Policy

FailoverGroup A failover group.

FailoverGroupReadOnly Read-only endpoint of the failover group instance.


Endpoint

FailoverGroupReadWrite Read-write endpoint of the failover group instance.


Endpoint

FailoverGroupReplicationRole Defines values for FailoverGroupReplicationRole.

FailoverGroupUpdate A failover group update request.

FirewallRule A server firewall rule.

FirewallRuleList A list of server firewall rules.

GeoBackupPolicy A database geo backup policy.

IdentityType Defines values for IdentityType.

ImportExistingDatabase Contains the information necessary to perform import operation


Definition for existing database.

ImportExportExtensions An Extension operation result resource.


OperationResult

ImportExportOperationResult An ImportExport operation result resource.

ImportNewDatabaseDefinition Contains the information necessary to perform import operation


for new database.

InstanceFailoverGroup An instance failover group.

InstanceFailoverGroupRead Read-only endpoint of the failover group instance.


OnlyEndpoint

InstanceFailoverGroupRead Read-write endpoint of the failover group instance.


WriteEndpoint

InstanceFailoverGroup Defines values for InstanceFailoverGroupReplicationRole.


ReplicationRole
InstancePool An Azure SQL instance pool.

InstancePoolEditionCapability The instance pool capability

InstancePoolFamilyCapability The instance pool family capability.

InstancePoolLicenseType Defines values for InstancePoolLicenseType.

InstancePoolUpdate An update to an Instance pool.

InstancePoolVcoresCapability The managed instance virtual cores capability.

IPv6FirewallRule An IPv6 server firewall rule.

Job A job.

JobAgent An Azure SQL job agent.

JobAgentState Defines values for JobAgentState.

JobAgentUpdate An update to an Azure SQL job agent.

JobCredential A stored credential that can be used by a job to connect to


target databases.

JobExecution An execution of a job

JobExecutionLifecycle Defines values for JobExecutionLifecycle.

JobExecutionTarget The target that a job execution is executed on.

JobSchedule Scheduling properties of a job.

JobStep A job step.

JobStepAction The action to be executed by a job step.

JobStepActionSource Defines values for JobStepActionSource.

JobStepActionType Defines values for JobStepActionType.

JobStepExecutionOptions The execution options of a job step.

JobStepOutput The output configuration of a job step.

JobStepOutputType Defines values for JobStepOutputType.

JobTarget A job target, for example a specific database or a container of


databases that is evaluated during job execution.

JobTargetGroup A group of job targets.

JobTargetType Defines values for JobTargetType.


JobVersion A job version.

LedgerDigestUploads Azure SQL Database ledger digest upload settings.

LicenseTypeCapability The license type capability

LocationCapabilities The location capability.

LogicalDatabaseTransparent A logical database transparent data encryption state.


DataEncryption

LogSizeCapability The log size capability.

LogSizeUnit Defines values for LogSizeUnit.

LongTermRetentionBackup A long term retention backup.

LongTermRetentionBackup A LongTermRetentionBackup operation result resource.


OperationResult

LongTermRetentionPolicy A long term retention policy.

MaintenanceConfiguration The maintenance configuration capability


Capability

MaintenanceWindowOptions Maintenance window options.

MaintenanceWindows Maintenance windows.

MaintenanceWindowTime Maintenance window time range.


Range

ManagedBackupShortTerm A short term retention policy.


RetentionPolicy

ManagedDatabase A managed database resource.

ManagedDatabaseCreate Defines values for ManagedDatabaseCreateMode.


Mode

ManagedDatabaseRestore A managed database restore details.


DetailsResult

ManagedDatabaseSecurity A managed database security alert policy.


AlertPolicy

ManagedDatabaseStatus Defines values for ManagedDatabaseStatus.

ManagedDatabaseUpdate An managed database update.

ManagedInstance An Azure SQL managed instance.


ManagedInstance An Azure SQL managed instance administrator.
Administrator

ManagedInstanceAzure Azure Active Directory only authentication.


ADOnlyAuthentication

ManagedInstanceEdition The managed server capability


Capability

ManagedInstanceEncryption The managed instance encryption protector.


Protector

ManagedInstanceExternal Properties of a active directory administrator.


Administrator

ManagedInstanceFamily The managed server family capability.


Capability

ManagedInstanceKey A managed instance key.

ManagedInstanceLicenseType Defines values for ManagedInstanceLicenseType.

ManagedInstanceLongTerm A long term retention backup for a managed database.


RetentionBackup

ManagedInstanceLongTerm A long term retention policy.


RetentionPolicy

ManagedInstance The maintenance configuration capability


MaintenanceConfiguration
Capability

ManagedInstanceOperation A managed instance operation.

ManagedInstanceOperation The parameters of a managed instance operation.


ParametersPair

ManagedInstanceOperation The steps of a managed instance operation.


Steps

ManagedInstancePairInfo Pairs of Managed Instances in the failover group.

ManagedInstancePecProperty A private endpoint connection under a managed instance

ManagedInstancePrivate A private endpoint connection


EndpointConnection

ManagedInstancePrivate Properties of a private endpoint connection.


EndpointConnection
Properties

ManagedInstancePrivateEndpointProperty
ManagedInstancePrivateLink A private link resource

ManagedInstancePrivateLink Properties of a private link resource.


Properties

ManagedInstancePrivateLinkServiceConnectionStateProperty

ManagedInstanceProxy Defines values for ManagedInstanceProxyOverride.


Override

ManagedInstanceQuery Database query.

ManagedInstanceUpdate An update request for an Azure SQL Database managed


instance.

ManagedInstanceVcores The managed instance virtual cores capability.


Capability

ManagedInstanceVersion The managed instance capability


Capability

ManagedInstanceVulnerability A managed instance vulnerability assessment.


Assessment

ManagedServerCreateMode Defines values for ManagedServerCreateMode.

ManagedServerDnsAlias A managed server DNS alias.

ManagedServerDnsAlias A managed server DNS alias acquisition request.


Acquisition

ManagedServerDnsAlias A managed server dns alias creation request.


Creation

ManagedServerSecurityAlert A managed server security alert policy.


Policy

ManagedTransparentData A managed database transparent data encryption state.


Encryption

ManagementOperationState Defines values for ManagementOperationState.

MaxSizeCapability The maximum size capability.

MaxSizeRangeCapability The maximum size range capability.

MaxSizeUnit Defines values for MaxSizeUnit.

Metric Database metrics.

MetricAvailability A metric availability value.

MetricDefinition A database metric definition.


MetricName A database metric name.

MetricType Defines values for MetricType.

MetricValue Represents database metrics.

MinCapacityCapability The min capacity capability

Name ARM Usage Name

NetworkIsolationSettings Contains the ARM resources for which to create private endpoint
connection.

Operation SQL REST API operation definition.

OperationDisplay Display metadata associated with the operation.

OperationImpact The impact of an operation, both in absolute and relative terms.

OperationOrigin Defines values for OperationOrigin.

OutboundFirewallRule An Azure SQL DB Server Outbound Firewall Rule.

Page<T> Defines a page in Azure responses.

Page1<T> Defines a page in Azure responses.

PartnerInfo Partner server information for the failover group.

PartnerRegionInfo Partner region information for the failover group.

PauseDelayTimeUnit Defines values for PauseDelayTimeUnit.

PerformanceLevelCapability The performance level capability.

PerformanceLevelUnit Defines values for PerformanceLevelUnit.

PrimaryAggregationType Defines values for PrimaryAggregationType.

PrincipalType Defines values for PrincipalType.

PrivateEndpointConnection A private endpoint connection

PrivateEndpointConnection Properties of a private endpoint connection.


Properties

PrivateEndpointConnection Contains the private endpoint connection requests status.


RequestStatus

PrivateEndpointProperty

PrivateEndpointProvisioning Defines values for PrivateEndpointProvisioningState.


State
PrivateLinkResource A private link resource

PrivateLinkResourceProperties Properties of a private link resource.

PrivateLinkServiceConnection Defines values for


StateActionsRequire PrivateLinkServiceConnectionStateActionsRequire.

PrivateLinkServiceConnectionStateProperty

PrivateLinkServiceConnection Defines values for PrivateLinkServiceConnectionStateStatus.


StateStatus

ProvisioningState Defines values for ProvisioningState.

ProxyResource ARM proxy resource.

ProxyResourceWithWritable ARM proxy resource.


Name

QueryMetricInterval Properties of a query metrics interval.

QueryMetricProperties Properties of a topquery metric in one interval.

QueryMetricUnitType Defines values for QueryMetricUnitType.

QueryStatistics

QueryStatisticsProperties Properties of a query execution statistics.

QueryTimeGrainType Defines values for QueryTimeGrainType.

ReadOnlyEndpointFailover Defines values for ReadOnlyEndpointFailoverPolicy.


Policy

ReadScaleCapability The read scale capability.

ReadWriteEndpointFailover Defines values for ReadWriteEndpointFailoverPolicy.


Policy

RecommendedAction Database, Server or Elastic Pool Recommended Action.

RecommendedActionCurrent Defines values for RecommendedActionCurrentState.


State

RecommendedActionErrorInfo Contains error information for an Azure SQL Database, Server or


Elastic Pool Recommended Action.

RecommendedActionImpact Contains information of estimated or observed impact on


Record various metrics for an Azure SQL Database, Server or Elastic Pool
Recommended Action.

RecommendedAction Contains information for manual implementation for an Azure


ImplementationInfo SQL Database, Server or Elastic Pool Recommended Action.
RecommendedActionMetric Contains time series of various impacted metrics for an Azure
Info SQL Database, Server or Elastic Pool Recommended Action.

RecommendedActionStateInfo Contains information of current state for an Azure SQL Database,


Server or Elastic Pool Recommended Action.

RecommendedSensitivityLabel A recommended sensitivity label update operation.


Update

RecommendedSensitivityLabel A list of recommended sensitivity label update operations.


UpdateList

RecoverableDatabase A recoverable database

RecoverableManaged A recoverable managed database resource.


Database

ReplicationLink A replication link.

ReplicationLinkType Defines values for ReplicationLinkType.

ReplicationMode Defines values for ReplicationMode.

ReplicationState Defines values for ReplicationState.

ReplicaType Defines values for ReplicaType.

Resource ARM resource.

ResourceIdentity Azure Active Directory identity configuration for a resource.

ResourceMoveDefinition Contains the information necessary to perform a resource move


(rename).

ResourceWithWritableName ARM resource.

RestorableDroppedDatabase A restorable dropped database resource.

RestorableDroppedManaged A restorable dropped managed database resource.


Database

RestorePoint Database restore points.

SampleName Defines values for SampleName.

SecondaryType Defines values for SecondaryType.

SecurityEvent A security event.

SecurityEventsFilterParameters The properties that are supported in the $filter operation.

SecurityEventSqlInjection The properties of a security event sql injection additional


AdditionalProperties properties.
SensitivityLabel A sensitivity label.

SensitivityLabelUpdate A sensitivity label update operation.

SensitivityLabelUpdateList A list of sensitivity label update operations.

Server An Azure SQL Database server.

ServerAdvancedThreat A server Advanced Threat Protection.


Protection

ServerAutomaticTuning Server-level Automatic Tuning.

ServerAzureADAdministrator Azure Active Directory administrator.

ServerAzureADOnly Azure Active Directory only authentication.


Authentication

ServerBlobAuditingPolicy A server blob auditing policy.

ServerCommunicationLink Server communication link.

ServerConnectionPolicy A server connection policy

ServerConnectionType Defines values for ServerConnectionType.

ServerDevOpsAuditing A server DevOps auditing settings.


Settings

ServerDnsAlias A server DNS alias.

ServerDnsAliasAcquisition A server dns alias acquisition request.

ServerExternalAdministrator Properties of a active directory administrator.

ServerInfo Server info for the server trust group.

ServerKey A server key.

ServerKeyType Defines values for ServerKeyType.

ServerNetworkAccessFlag Defines values for ServerNetworkAccessFlag.

ServerOperation A server operation.

ServerPrivateEndpoint A private endpoint connection under a server


Connection

ServerSecurityAlertPolicy A server security alert policy.

ServerTrustCertificate Server trust certificate imported from box to enable connection


between box and Sql Managed Instance.
ServerTrustGroup A server trust group.

ServerUpdate An update request for an Azure SQL Database server.

ServerUsage Represents server metrics.

ServerVersionCapability The server capability

ServerVulnerabilityAssessment A server vulnerability assessment.

ServerWorkspaceFeature Defines values for ServerWorkspaceFeature.

ServiceObjective Represents a database service objective.

ServiceObjectiveCapability The service objectives capability.

ServiceObjectiveId Defines values for ServiceObjectiveId.

ServiceObjectiveName Defines values for ServiceObjectiveName.

ServicePrincipal The managed instance's service principal configuration for a


resource.

ServicePrincipalType Defines values for ServicePrincipalType.

Sku An ARM Resource SKU.

SloUsageMetric A Slo Usage Metric.

SqlAgentConfiguration A recoverable managed database resource.

StorageCapability The storage account type capability.

StorageKeyType Defines values for StorageKeyType.

SubscriptionUsage Usage Metric of a Subscription in a Location.

SyncAgent An Azure SQL Database sync agent.

SyncAgentKeyProperties Properties of an Azure SQL Database sync agent key.

SyncAgentLinkedDatabase An Azure SQL Database sync agent linked database.

SyncAgentState Defines values for SyncAgentState.

SyncConflictResolutionPolicy Defines values for SyncConflictResolutionPolicy.

SyncDatabaseIdProperties Properties of the sync database id.

SyncDirection Defines values for SyncDirection.

SyncFullSchemaProperties Properties of the database full schema.

SyncFullSchemaTable Properties of the table in the database full schema.


SyncFullSchemaTableColumn Properties of the column in the table of database full schema.

SyncGroup An Azure SQL Database sync group.

SyncGroupLogProperties Properties of an Azure SQL Database sync group log.

SyncGroupLogType Defines values for SyncGroupLogType.

SyncGroupSchema Properties of sync group schema.

SyncGroupSchemaTable Properties of table in sync group schema.

SyncGroupSchemaTable Properties of column in sync group table.


Column

SyncGroupState Defines values for SyncGroupState.

SyncGroupsType Defines values for SyncGroupsType.

SyncMember An Azure SQL Database sync member.

SyncMemberDbType Defines values for SyncMemberDbType.

SyncMemberState Defines values for SyncMemberState.

SystemData Metadata pertaining to creation and last modification of the


resource.

TableTemporalType Defines values for TableTemporalType.

TdeCertificate A TDE certificate that can be uploaded into a server.

TimeZone Time Zone.

TopQueries

TrackedResource ARM tracked top level resource.

UnitDefinitionType Defines values for UnitDefinitionType.

UnitType Defines values for UnitType.

UpdateLongTermRetention Contains the information necessary to perform long term


BackupParameters retention backup update operation.

UpdateManagedInstanceDns A recoverable managed database resource.


ServersOperation

UpsertManagedServerOperationParameters

UpsertManagedServerOperationStep

Usage ARM usage.


UserIdentity Azure Active Directory identity configuration for a resource.

VirtualCluster An Azure SQL virtual cluster.

VirtualClusterUpdate An update request for an Azure SQL Database virtual cluster.

VirtualNetworkRule A virtual network rule.

VirtualNetworkRuleState Defines values for VirtualNetworkRuleState.

VulnerabilityAssessment Properties of a Vulnerability Assessment recurring scans.


RecurringScansProperties

VulnerabilityAssessmentScan Properties of a vulnerability assessment scan error.


Error

VulnerabilityAssessmentScan A vulnerability assessment scan record.


Record

VulnerabilityAssessmentScan Defines values for VulnerabilityAssessmentScanState.


State

VulnerabilityAssessmentScan Defines values for VulnerabilityAssessmentScanTriggerType.


TriggerType

WorkloadClassifier Workload classifier operations for a data warehouse

WorkloadGroup Workload group operations for a data warehouse

Enums
AdvancedThreatProtection Defines values for AdvancedThreatProtectionState.
State

AdvisorStatus Defines values for AdvisorStatus.

AutoExecuteStatus Defines values for AutoExecuteStatus.

AutoExecuteStatusInherited Defines values for AutoExecuteStatusInheritedFrom.


From

AutomaticTuningDisabled Defines values for AutomaticTuningDisabledReason.


Reason

AutomaticTuningMode Defines values for AutomaticTuningMode.

AutomaticTuningOptionMode Defines values for AutomaticTuningOptionModeActual.


Actual

AutomaticTuningOptionMode Defines values for AutomaticTuningOptionModeDesired.


Desired

AutomaticTuningServerMode Defines values for AutomaticTuningServerMode.

AutomaticTuningServer Defines values for AutomaticTuningServerReason.


Reason

BlobAuditingPolicyState Defines values for BlobAuditingPolicyState.

CapabilityStatus Defines values for CapabilityStatus.

CheckNameAvailabilityReason Defines values for CheckNameAvailabilityReason.

DataMaskingFunction Defines values for DataMaskingFunction.

DataMaskingRuleState Defines values for DataMaskingRuleState.

DataMaskingState Defines values for DataMaskingState.

GeoBackupPolicyState Defines values for GeoBackupPolicyState.

ImplementationMethod Defines values for ImplementationMethod.

IsRetryable Defines values for IsRetryable.

JobScheduleType Defines values for JobScheduleType.

JobTargetGroupMembership Defines values for JobTargetGroupMembershipType.


Type

LedgerDigestUploadsState Defines values for LedgerDigestUploadsState.

RecommendedActionInitiated Defines values for RecommendedActionInitiatedBy.


By

RecommendedSensitivityLabel Defines values for RecommendedSensitivityLabelUpdateKind.


UpdateKind

ReplicationRole Defines values for ReplicationRole.

RestorePointType Defines values for RestorePointType.

SecurityAlertPolicyState Defines values for SecurityAlertPolicyState.

SecurityAlertsPolicyState Defines values for SecurityAlertsPolicyState.

SecurityEventType Defines values for SecurityEventType.

SensitivityLabelRank Defines values for SensitivityLabelRank.

SensitivityLabelSource Defines values for SensitivityLabelSource.

SensitivityLabelUpdateKind Defines values for SensitivityLabelUpdateKind.


TransparentDataEncryption Defines values for TransparentDataEncryptionState.
State

VulnerabilityAssessmentPolicy Defines values for VulnerabilityAssessmentPolicyBaselineName.


BaselineName
com.microsoft.azure.management.sql
Reference
Package: com.microsoft.azure.management.sql
Maven Artifact: com.microsoft.azure:azure-mgmt-sql:1.41.4

This package contains the classes for SqlManagementClient. The Azure SQL Database
management API provides a RESTful set of web services that interact with Azure SQL
Database services to manage your databases. The API enables you to create, retrieve,
update, and delete databases.

Classes
AutomaticTuningOptions Automatic tuning properties for individual advisors.

AutomaticTuningServer Automatic tuning properties for individual advisors.


Options

CatalogCollationType Defines values for CatalogCollationType.

CheckNameAvailability A request to check whether the specified name for a resource is


Request available.

CompleteDatabaseRestore Contains the information necessary to perform a complete


Definition database restore operation.

CreateDatabaseRestorePoint Contains the information necessary to perform a create database


Definition restore point operation.

CreateMode Defines values for CreateMode.

DatabaseEdition Defines values for DatabaseEdition.

DatabaseUpdate Represents a database update.

DatabaseVulnerability Properties for an Azure SQL Database Vulnerability Assessment


AssessmentRuleBaselineItem rule baseline's result.

EditionCapability The database edition capabilities.

ElasticPoolDtuCapability The Elastic Pool DTU capability.

ElasticPoolEdition Defines values for ElasticPoolEdition.

ElasticPoolEditionCapability The elastic pool edition capabilities.

ElasticPoolPerDatabaseMax The max per-database DTU capability.


DtuCapability
ElasticPoolPerDatabaseMinDtu The minimum per-database DTU capability.
Capability

ElasticPoolState Defines values for ElasticPoolState.

ElasticPoolUpdate Represents an elastic pool update.

ExportRequest Export database parameters.

FailoverGroupReadOnly Read-only endpoint of the failover group instance.


Endpoint

FailoverGroupReadWrite Read-write endpoint of the failover group instance.


Endpoint

FailoverGroupReplicationRole Defines values for FailoverGroupReplicationRole.

FailoverGroupUpdate A failover group update request.

IdentityType Defines values for IdentityType.

ImportExtensionRequest Import database parameters.

ImportRequest Import database parameters.

InstanceFailoverGroupRead Read-only endpoint of the failover group instance.


OnlyEndpoint

InstanceFailoverGroupRead Read-write endpoint of the failover group instance.


WriteEndpoint

InstanceFailoverGroup Defines values for InstanceFailoverGroupReplicationRole.


ReplicationRole

InstancePoolLicenseType Defines values for InstancePoolLicenseType.

InstancePoolUpdate An update to an Instance pool.

JobAgentState Defines values for JobAgentState.

JobAgentUpdate An update to an Azure SQL job agent.

JobExecutionLifecycle Defines values for JobExecutionLifecycle.

JobExecutionTarget The target that a job execution is executed on.

JobSchedule Scheduling properties of a job.

JobStepAction The action to be executed by a job step.

JobStepActionSource Defines values for JobStepActionSource.

JobStepActionType Defines values for JobStepActionType.


JobStepExecutionOptions The execution options of a job step.

JobStepOutput The output configuration of a job step.

JobStepOutputType Defines values for JobStepOutputType.

JobTarget A job target, for example a specific database or a container of


databases that is evaluated during job execution.

JobTargetType Defines values for JobTargetType.

ManagedDatabaseCreate Defines values for ManagedDatabaseCreateMode.


Mode

ManagedDatabaseStatus Defines values for ManagedDatabaseStatus.

ManagedDatabaseUpdate An managed database update.

ManagedInstanceLicenseType Defines values for ManagedInstanceLicenseType.

ManagedInstancePairInfo Pairs of Managed Instances in the failover group.

ManagedInstanceProxy Defines values for ManagedInstanceProxyOverride.


Override

ManagedInstanceUpdate An update request for an Azure SQL Database managed


instance.

ManagedServerCreateMode Defines values for ManagedServerCreateMode.

ManagementOperationState Defines values for ManagementOperationState.

MaxSizeCapability The maximum size limits for a database.

MetricAvailability A metric availability value.

MetricName A database metric name.

MetricValue Represents database metrics.

Name ARM Usage Name.

OperationDisplay Display metadata associated with the operation.

OperationImpact The impact of an operation, both in absolute and relative terms.

OperationOrigin Defines values for OperationOrigin.

PartnerInfo Partner server information for the failover group.

PartnerRegionInfo Partner region information for the failover group.

PrimaryAggregationType Defines values for PrimaryAggregationType.


ProvisioningState Defines values for ProvisioningState.

ReadOnlyEndpointFailover Defines values for ReadOnlyEndpointFailoverPolicy.


Policy

ReadWriteEndpointFailover Defines values for ReadWriteEndpointFailoverPolicy.


Policy

RecommendedIndex Represents a database recommended index.

ReplicationState Defines values for ReplicationState.

ResourceIdentity Azure Active Directory identity configuration for a resource.

ResourceMoveDefinition Contains the information necessary to perform a resource move


(rename).

SampleName Defines values for SampleName.

ServerDnsAliasAcquisition A server DNS alias acquisition request.

ServerKeyType Defines values for ServerKeyType.

ServerUpdate An update request for an Azure SQL Database server.

ServerVersionCapability The server capabilities.

ServiceObjectiveCapability The service objectives capability.

ServiceObjectiveName Defines values for ServiceObjectiveName.

Sku The resource model definition representing SKU.

SloUsageMetric A Slo Usage Metric.

SqlDatabasePremiumService The name of the configured Service Level Objective of a


Objective "Premium" Azure SQL Database.

SqlDatabaseStandardService The name of the configured Service Level Objective of a


Objective "Standard" Azure SQL Database.

SyncAgentState Defines values for SyncAgentState.

SyncConflictResolutionPolicy Defines values for SyncConflictResolutionPolicy.

SyncDirection Defines values for SyncDirection.

SyncFullSchemaTable Properties of the table in the database full schema.

SyncFullSchemaTableColumn Properties of the column in the table of database full schema.

SyncGroupLogType Defines values for SyncGroupLogType.


SyncGroupSchema Properties of sync group schema.

SyncGroupSchemaTable Properties of table in sync group schema.

SyncGroupSchemaTable Properties of column in sync group table.


Column

SyncGroupState Defines values for SyncGroupState.

SyncMemberDbType Defines values for SyncMemberDbType.

SyncMemberState Defines values for SyncMemberState.

TransparentDataEncryption Defines values for TransparentDataEncryptionActivityStatus.


ActivityStatus

UnitDefinitionType Defines values for UnitDefinitionType.

UnitType Defines values for UnitType.

VirtualClusterUpdate An update request for an Azure SQL Database virtual cluster.

VirtualNetworkRuleState Defines values for VirtualNetworkRuleState.

VulnerabilityAssessment Properties of a Vulnerability Assessment recurring scans.


RecurringScansProperties

VulnerabilityAssessmentScan Properties of a vulnerability assessment scan error.


Error

VulnerabilityAssessmentScan Defines values for VulnerabilityAssessmentScanState.


State

VulnerabilityAssessmentScan Defines values for VulnerabilityAssessmentScanTriggerType.


TriggerType

Interfaces
CheckNameAvailabilityResult The result of checking for the SQL server name availability.

DatabaseMetric An immutable client-side representation of an Azure SQL


DatabaseMetric.

ElasticPoolActivity An immutable client-side representation of an Azure SQL Elastic


Pool's Activity.

ElasticPoolDatabaseActivity An immutable client-side representation of an Azure SQL Elastic


Pool's Database Activity.

RecommendedElasticPool An immutable client-side representation of an Azure SQL


Recommended ElasticPool.

RecommendedElasticPool An immutable client-side representation of an Azure SQL


Metric Replication link.

RegionCapabilities An immutable client-side representation of an Azure SQL server


capabilities for a given region.

ReplicationLink An immutable client-side representation of an Azure SQL


Replication link.

RestorePoint An immutable client-side representation of an Azure SQL


database's Restore Point.

ServerMetric An immutable client-side representation of an Azure SQL Server


Metric.

ServerUsage An immutable client-side representation of an Azure SQL server


usage metric.

ServiceLevelObjectiveUsage An immutable client-side representation of an Azure SQL


Metric database's service level objective usage metric.

ServiceObjective An immutable client-side representation of an Azure SQL Service


Objective.

ServiceTierAdvisor An immutable client-side representation of an Azure SQL Service


tier advisor.

SloUsageMetricInterface An immutable client-side representation of an Azure SQL


database's SloUsageMetric.

SqlActiveDirectory Response containing the Azure SQL Active Directory


Administrator administrator.

SqlChildrenOperations<T> Base class for Azure SQL Server child resource operations.

SqlChildrenOperations.Sql Base interface for Azure SQL Server child resource actions.
ChildrenActionsDefinition<T>

SqlDatabase An immutable client-side representation of an Azure SQL Server


Database.

SqlDatabase.DefinitionStages Grouping of all the SQL Database definition stages.

SqlDatabase.DefinitionStages. The first stage of the SQL Server Firewall rule definition.
Blank<ParentT>

SqlDatabase.DefinitionStages. The SQL database interface with all starting options for
WithAllDifferent definition.
Options<ParentT>
SqlDatabase.DefinitionStages. The final stage of the SQL Database definition after the SQL
WithAttachAfterElasticPool Elastic Pool definition.
Options<ParentT>

SqlDatabase.DefinitionStages. The final stage of the SQL Database definition with all the other
WithAttachAll options.
Options<ParentT>

SqlDatabase.DefinitionStages. The final stage of the SQL Database definition.


WithAttachFinal<ParentT>

SqlDatabase.DefinitionStages. Sets the authentication type and SQL or Active Directory


WithAuthentication<ParentT> administrator login and password.

SqlDatabase.DefinitionStages. Sets the authentication type and SQL or Active Directory


WithAuthenticationAfterElastic administrator login and password.
Pool<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set the collation for database.
WithCollation<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set the collation for database.
WithCollationAfterElasticPool
Options<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set the create mode for
WithCreateMode<ParentT> database.

SqlDatabase.DefinitionStages. The SQL Database definition to set the edition for database.
WithEdition<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set the edition default for
WithEditionDefaults<ParentT> database.

SqlDatabase.DefinitionStages. The SQL Database definition to set the collation for database.
WithEditionDefaults.With
Collation<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set the elastic pool for database.
WithElasticPool
Name<ParentT>

SqlDatabase.DefinitionStages. The stage to decide whether using existing database or not.


WithExistingDatabaseAfter
ElasticPool<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to import a BACPAC file as the


WithImportFrom<ParentT> source database.

SqlDatabase.DefinitionStages. The SQL Database definition to import a BACPAC file as the


WithImportFromAfterElastic source database within an elastic pool.
Pool<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set the Max Size in Bytes for
WithMaxSizeBytes<ParentT> database.

SqlDatabase.DefinitionStages. The SQL Database definition to set the Max Size in Bytes for
WithMaxSizeBytesAfterElastic database.
PoolOptions<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set a restorable dropped


WithRestorableDropped database as the source database.
Database<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set a restore point as the source
WithRestorePoint database.
Database<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set a restore point as the source
WithRestorePointDatabase database within an elastic pool.
AfterElasticPool<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set a sample database as the


WithSample source database.
Database<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set a sample database as the


WithSampleDatabaseAfter source database within an elastic pool.
ElasticPool<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set the service level objective.
WithService
Objective<ParentT>

SqlDatabase.DefinitionStages. The SQL Database definition to set the source database id for
WithSourceDatabase database.
Id<ParentT>

SqlDatabase.DefinitionStages. Sets the storage key type and value to use.


WithStorageKey<ParentT>

SqlDatabase.DefinitionStages. Sets the storage key type and value to use.


WithStorageKeyAfterElastic
Pool<ParentT>

SqlDatabase.SqlDatabase Container interface for all the definitions that need to be


Definition<ParentT> implemented.

SqlDatabase.Update The template for a SqlDatabase update operation, containing all


the settings that can be modified.

SqlDatabase.UpdateStages Grouping of all the SqlDatabase update stages.


SqlDatabase.UpdateStages. The SQL Database definition to set the edition for database.
WithEdition

SqlDatabase.UpdateStages. The SQL Database definition to set the elastic pool for database.
WithElasticPoolName

SqlDatabase.UpdateStages. The SQL Database definition to set the Max Size in Bytes for
WithMaxSizeBytes database.

SqlDatabase.UpdateStages. The SQL Database definition to set the service level objective.
WithServiceObjective

SqlDatabaseAutomaticTuning An immutable client-side representation of an Azure SQL


database automatic tuning object.

SqlDatabaseAutomaticTuning. The template for a SqlDatabaseAutomaticTuning update


Update operation, containing all the settings that can be modified.

SqlDatabaseAutomaticTuning. Grouping of all the SqlDatabaseAutomaticTuning update stages.


UpdateStages

SqlDatabaseAutomaticTuning. The update stage setting the database automatic tuning desired
UpdateStages.WithAutomatic state.
TuningMode

SqlDatabaseAutomaticTuning. The update stage setting the database automatic tuning options.
UpdateStages.WithAutomatic
TuningOptions

SqlDatabaseExportRequest An immutable client-side representation of an Azure SQL


Database export operation request.

SqlDatabaseExportRequest. Grouping of database export definition stages.


DefinitionStages

SqlDatabaseExportRequest. Sets the storage URI to use.


DefinitionStages.ExportTo

SqlDatabaseExportRequest. Sets the authentication type and SQL or Active Directory


DefinitionStages.With administrator login and password.
AuthenticationTypeAndLogin
Password

SqlDatabaseExportRequest. The stage of the definition which contains all the minimum
DefinitionStages.WithExecute required inputs for execution, but also allows for any other
optional settings to be specified.

SqlDatabaseExportRequest. Sets the storage key type and value to use.


DefinitionStages.WithStorage
TypeAndKey
SqlDatabaseExportRequest.Sql The entirety of database export operation definition.
DatabaseExportRequest
Definition

SqlDatabaseImportExport Response containing result of the Azure SQL Database import or


Response export operation.

SqlDatabaseImportRequest An immutable client-side representation of an Azure SQL


Database import operation request.

SqlDatabaseImportRequest. Grouping of database import definition stages.


DefinitionStages

SqlDatabaseImportRequest. Sets the storage URI to use.


DefinitionStages.ImportFrom

SqlDatabaseImportRequest. Sets the authentication type and SQL or Active Directory


DefinitionStages.With administrator login and password.
AuthenticationTypeAndLogin
Password

SqlDatabaseImportRequest. The stage of the definition which contains all the minimum
DefinitionStages.WithExecute required inputs for execution, but also allows for any other
optional settings to be specified.

SqlDatabaseImportRequest. Sets the storage key type and value to use.


DefinitionStages.WithStorage
TypeAndKey

SqlDatabaseImportRequest. The entirety of database import operation definition.


SqlDatabaseImportRequest
Definition

SqlDatabaseMetric Response containing the Azure SQL Database metric.

SqlDatabaseMetricAvailability Response containing the Azure SQL Database metric availability.

SqlDatabaseMetricDefinition Response containing the Azure SQL Database metric definition.

SqlDatabaseMetricValue Response containing the Azure SQL Database metric value.

SqlDatabaseOperations A representation of the Azure SQL Database operations.

SqlDatabaseOperations. Grouping of all the SQL database definition stages.


DefinitionStages

SqlDatabaseOperations. The first stage of the SQL database definition.


DefinitionStages.Blank

SqlDatabaseOperations. The SQL database interface with all starting options for
DefinitionStages.WithAll definition.
DifferentOptions
SqlDatabaseOperations. Sets the authentication type and SQL or Active Directory
DefinitionStages.With administrator login and password.
Authentication

SqlDatabaseOperations. Sets the authentication type and SQL or Active Directory


DefinitionStages.With administrator login and password.
AuthenticationAfterElasticPool

SqlDatabaseOperations. The SQL Database definition to set the collation for database.
DefinitionStages.WithCollation

SqlDatabaseOperations. The SQL Database definition to set the collation for database.
DefinitionStages.WithCollation
AfterElasticPoolOptions

SqlDatabaseOperations. The final stage of the SQL Database definition after the SQL
DefinitionStages.WithCreate Elastic Pool definition.
AfterElasticPoolOptions

SqlDatabaseOperations. A SQL Database definition with sufficient inputs to create a new


DefinitionStages.WithCreateAll SQL database in the cloud, but exposing additional optional
Options settings to specify.

SqlDatabaseOperations. A SQL Database definition with sufficient inputs to create a new


DefinitionStages.WithCreate SQL Server in the cloud, but exposing additional optional inputs
Final to specify.

SqlDatabaseOperations. The SQL Database definition to set the create mode for
DefinitionStages.WithCreate database.
Mode

SqlDatabaseOperations. The SQL Database definition to set the edition for database.
DefinitionStages.WithEdition

SqlDatabaseOperations. The SQL Database definition to set the edition for database with
DefinitionStages.WithEdition defaults.
Defaults

SqlDatabaseOperations. The SQL Database definition to set the collation for database.
DefinitionStages.WithEdition
Defaults.WithCollation

SqlDatabaseOperations. The SQL Database definition to set the elastic pool for database.
DefinitionStages.WithElastic
PoolName

SqlDatabaseOperations. The stage to decide whether using existing database or not.


DefinitionStages.WithExisting
DatabaseAfterElasticPool

SqlDatabaseOperations. The SQL Database definition to import a BACPAC file as the


DefinitionStages.WithImport source database.
From

SqlDatabaseOperations. The SQL Database definition to import a BACPAC file as the


DefinitionStages.WithImport source database.
FromAfterElasticPool

SqlDatabaseOperations. The SQL Database definition to set the Max Size in Bytes for
DefinitionStages.WithMaxSize database.
Bytes

SqlDatabaseOperations. The SQL Database definition to set the Max Size in Bytes for
DefinitionStages.WithMaxSize database.
BytesAfterElasticPoolOptions

SqlDatabaseOperations. The SQL Database definition to set a restorable dropped


DefinitionStages.With database as the source database.
RestorableDroppedDatabase

SqlDatabaseOperations. The SQL Database definition to set a restore point as the source
DefinitionStages.WithRestore database.
PointDatabase

SqlDatabaseOperations. The SQL Database definition to set a restore point as the source
DefinitionStages.WithRestore database within an elastic pool.
PointDatabaseAfterElasticPool

SqlDatabaseOperations. The SQL Database definition to set a sample database as the


DefinitionStages.WithSample source database.
Database

SqlDatabaseOperations. The SQL Database definition to set a sample database as the


DefinitionStages.WithSample source database within an elastic pool.
DatabaseAfterElasticPool

SqlDatabaseOperations. The SQL Database definition to set the service level objective.
DefinitionStages.WithService
Objective

SqlDatabaseOperations. The SQL Database definition to set the source database id for
DefinitionStages.WithSource database.
DatabaseId

SqlDatabaseOperations. The stage of the SQL Database rule definition allowing to specify
DefinitionStages.WithSql the parent resource group, SQL server and location.
Server

SqlDatabaseOperations. Sets the storage key type and value to use.


DefinitionStages.WithStorage
Key

SqlDatabaseOperations. Sets the storage key type and value to use.


DefinitionStages.WithStorage
KeyAfterElasticPool

SqlDatabaseOperations.Sql Grouping of the Azure SQL Database rule common actions.


DatabaseActionsDefinition

SqlDatabaseOperations.Sql Container interface for all the definitions that need to be


DatabaseOperationsDefinition implemented.

SqlDatabaseThreatDetection A representation of the Azure SQL Database threat detection


Policy policy.

SqlDatabaseThreatDetection Grouping of all the SQL database threat detection policy


Policy.DefinitionStages definition stages.

SqlDatabaseThreatDetection The first stage of the SQL database threat detection policy
Policy.DefinitionStages.Blank definition.

SqlDatabaseThreatDetection The SQL database threat detection policy definition to set the
Policy.DefinitionStages.With security alert policy alerts to be disabled.
AlertsFilter

SqlDatabaseThreatDetection The final stage of the SQL database threat detection policy
Policy.DefinitionStages.With definition.
Create

SqlDatabaseThreatDetection The SQL database threat detection policy definition to set the
Policy.DefinitionStages.With security alert policy email addresses.
EmailAddresses

SqlDatabaseThreatDetection The SQL database threat detection policy definition to set that
Policy.DefinitionStages.With the alert is sent to the account administrators.
EmailToAccountAdmins

SqlDatabaseThreatDetection The SQL database threat detection policy definition to set the
Policy.DefinitionStages.With number of days to keep in the Threat Detection audit logs.
RetentionDays

SqlDatabaseThreatDetection The SQL database threat detection policy definition to set the
Policy.DefinitionStages.With state.
SecurityAlertPolicyState

SqlDatabaseThreatDetection The SQL database threat detection policy definition to set the
Policy.DefinitionStages.With storage access key.
StorageAccountAccessKey

SqlDatabaseThreatDetection The SQL database threat detection policy definition to set the
Policy.DefinitionStages.With storage endpoint.
StorageEndpoint

SqlDatabaseThreatDetection Container interface for all the definitions that need to be


Policy.SqlDatabaseThreat implemented.
DetectionPolicyDefinition
SqlDatabaseThreatDetection Container interface for SQL database threat detection policy
Policy.SqlDatabaseThreat operations.
DetectionPolicyOperations

SqlDatabaseThreatDetection The template for a SQL database threat detection policy update
Policy.Update operation, containing all the settings that can be modified.

SqlDatabaseThreatDetection Grouping of all the SQL database threat detection policy update
Policy.UpdateStages stages.

SqlDatabaseThreatDetection The SQL database threat detection policy update definition to


Policy.UpdateStages.With set the security alert policy alerts to be disabled.
AlertsFilter

SqlDatabaseThreatDetection The SQL database threat detection policy update definition to


Policy.UpdateStages.With set the security alert policy email addresses.
EmailAddresses

SqlDatabaseThreatDetection The SQL database threat detection policy update definition to


Policy.UpdateStages.With set that the alert is sent to the account administrators.
EmailToAccountAdmins

SqlDatabaseThreatDetection The SQL database threat detection policy update definition to


Policy.UpdateStages.With set the number of days to keep in the Threat Detection audit
RetentionDays logs.

SqlDatabaseThreatDetection The SQL database threat detection policy update definition to


Policy.UpdateStages.With set the state.
SecurityAlertPolicyState

SqlDatabaseThreatDetection The SQL database threat detection policy update definition to


Policy.UpdateStages.With set the storage access key.
StorageAccountAccessKey

SqlDatabaseThreatDetection The SQL database threat detection policy update definition to


Policy.UpdateStages.With set the storage endpoint.
StorageEndpoint

SqlDatabaseUsageMetric The result of SQL server usages per SQL Database.

SqlElasticPool An immutable client-side representation of an Azure SQL Elastic


Pool.

SqlElasticPool.DefinitionStages Grouping of all the storage account definition stages.

SqlElasticPool.Definition The first stage of the SQL Server definition.


Stages.Blank<ParentT>

SqlElasticPool.Definition The final stage of the SQL Elastic Pool definition.


Stages.WithAttach<ParentT>
SqlElasticPool.Definition The SQL Elastic Pool definition to set the eDTU and storage
Stages.WithBasic capacity limits for a basic pool.
Edition<ParentT>

SqlElasticPool.Definition The SQL Elastic Pool definition to set the maximum DTU for one
Stages.WithDatabaseDtu database.
Max<ParentT>

SqlElasticPool.Definition The SQL Elastic Pool definition to set the minimum DTU for
Stages.WithDatabaseDtu database.
Min<ParentT>

SqlElasticPool.Definition The SQL Elastic Pool definition to set the number of shared DTU
Stages.WithDtu<ParentT> for elastic pool.

SqlElasticPool.Definition The SQL Elastic Pool definition to set the edition for database.
Stages.WithEdition<ParentT>

SqlElasticPool.Definition The SQL Elastic Pool definition to set the eDTU and storage
Stages.WithPremium capacity limits for a premium pool.
Edition<ParentT>

SqlElasticPool.Definition The SQL Elastic Pool definition to set the eDTU and storage
Stages.WithStandard capacity limits for a standard pool.
Edition<ParentT>

SqlElasticPool.Definition The SQL Elastic Pool definition to set the storage limit for the
Stages.WithStorage SQL Azure Database Elastic Pool in MB.
Capacity<ParentT>

SqlElasticPool.SqlElasticPool Container interface for all the definitions that need to be


Definition<ParentT> implemented.

SqlElasticPool.Update The template for a SQL Elastic Pool update operation, containing
all the settings that can be modified.

SqlElasticPool.UpdateStages Grouping of all the SQL Elastic Pool update stages.

SqlElasticPool.UpdateStages. The SQL Elastic Pool definition to add the Database in the elastic
WithDatabase pool.

SqlElasticPool.UpdateStages. The SQL Elastic Pool definition to set the maximum DTU for one
WithDatabaseDtuMax database.

SqlElasticPool.UpdateStages. The SQL Elastic Pool definition to set the minimum DTU for
WithDatabaseDtuMin database.

SqlElasticPool.UpdateStages. The SQL Elastic Pool definition to set the number of shared DTU
WithDtu for elastic pool.

SqlElasticPool.UpdateStages. The SQL Elastic Pool update definition to set the eDTU and
WithReservedDTUAndStorage storage capacity limits.
Capacity

SqlElasticPool.UpdateStages. The SQL Elastic Pool definition to set the storage limit for the
WithStorageCapacity SQL Azure Database Elastic Pool in MB.

SqlElasticPoolOperations A representation of the Azure SQL Elastic Pool operations.

SqlElasticPoolOperations. Grouping of all the SQL Elastic Pool definition stages.


DefinitionStages

SqlElasticPoolOperations. The SQL Elastic Pool definition to set the eDTU and storage
DefinitionStages.WithBasic capacity limits for a basic pool.
Edition

SqlElasticPoolOperations. A SQL Server definition with sufficient inputs to create a new


DefinitionStages.WithCreate SQL Elastic Pool in the cloud, but exposing additional optional
inputs to specify.

SqlElasticPoolOperations. The SQL Elastic Pool definition to add the Database in the Elastic
DefinitionStages.With Pool.
Database

SqlElasticPoolOperations. The SQL Elastic Pool definition to set the maximum DTU for one
DefinitionStages.With database.
DatabaseDtuMax

SqlElasticPoolOperations. The SQL Elastic Pool definition to set the minimum DTU for
DefinitionStages.With database.
DatabaseDtuMin

SqlElasticPoolOperations. The SQL Elastic Pool definition to set the number of shared DTU
DefinitionStages.WithDtu for elastic pool.

SqlElasticPoolOperations. The SQL Elastic Pool definition to set the edition type.
DefinitionStages.WithEdition

SqlElasticPoolOperations. The SQL Elastic Pool definition to set the eDTU and storage
DefinitionStages.WithPremium capacity limits for a premium pool.
Edition

SqlElasticPoolOperations. The first stage of the SQL Server Elastic Pool definition.
DefinitionStages.WithSql
Server

SqlElasticPoolOperations. The SQL Elastic Pool definition to set the eDTU and storage
DefinitionStages.WithStandard capacity limits for a standard pool.
Edition

SqlElasticPoolOperations. The SQL Elastic Pool definition to set the storage limit for the
DefinitionStages.WithStorage SQL Azure Database Elastic Pool in MB.
Capacity
SqlElasticPoolOperations.Sql Grouping of the Azure SQL Elastic Pool common actions.
ElasticPoolActionsDefinition

SqlElasticPoolOperations.Sql Container interface for all the definitions that need to be


ElasticPoolOperations implemented.
Definition

SqlEncryptionProtector An immutable client-side representation of an Azure SQL


Encryption Protector.

SqlEncryptionProtector. The template for a SQL Encryption Protector update operation,


Update containing all the settings that can be modified.

SqlEncryptionProtector. Grouping of all the SQL Encryption Protector update stages.


UpdateStages

SqlEncryptionProtector. The SQL Encryption Protector update definition to set the server
UpdateStages.WithServerKey key name and type.
NameAndType

SqlEncryptionProtector A representation of the Azure SQL Encryption Protector


Operations operations.

SqlEncryptionProtector Grouping of the Azure SQL Server Key common actions.


Operations.SqlEncryption
ProtectorActionsDefinition

SqlFailoverGroup An immutable client-side representation of an Azure SQL


Failover Group.

SqlFailoverGroup.Update The template for a SQL Failover Group update operation,


containing all the settings that can be modified.

SqlFailoverGroup.Update Grouping of all the SQL Virtual Network Rule update stages.
Stages

SqlFailoverGroup.Update The SQL Failover Group update definition to set the partner
Stages.WithDatabase servers.

SqlFailoverGroup.Update The SQL Failover Group update definition to set the failover
Stages.WithReadOnlyEndpoint policy of the read-only endpoint.
Policy

SqlFailoverGroup.Update The SQL Failover Group update definition to set the read-write
Stages.WithReadWrite endpoint failover policy.
EndpointPolicy

SqlFailoverGroupOperations A representation of the Azure SQL Failover Group operations.

SqlFailoverGroupOperations. Grouping of all the SQL Failover Group definition stages.


DefinitionStages
SqlFailoverGroupOperations. The final stage of the SQL Failover Group definition.
DefinitionStages.WithCreate

SqlFailoverGroupOperations. The SQL Failover Group definition to set the partner servers.
DefinitionStages.With
Database

SqlFailoverGroupOperations. The SQL Failover Group definition to set the partner servers.
DefinitionStages.WithPartner
Server

SqlFailoverGroupOperations. The SQL Failover Group definition to set the failover policy of the
DefinitionStages.WithRead read-only endpoint.
OnlyEndpointPolicy

SqlFailoverGroupOperations. The SQL Failover Group definition to set the read-write endpoint
DefinitionStages.WithRead failover policy.
WriteEndpointPolicy

SqlFailoverGroupOperations. The first stage of the SQL Failover Group definition.


DefinitionStages.WithSql
Server

SqlFailoverGroupOperations. Grouping of the Azure SQL Failover Group common actions.


SqlFailoverGroupActions
Definition

SqlFailoverGroupOperations. Container interface for all the definitions that need to be


SqlFailoverGroupOperations implemented.
Definition

SqlFirewallRule An immutable client-side representation of an Azure SQL Server


Firewall Rule.

SqlFirewallRule.Definition Grouping of all the SQL Firewall Rule definition stages.


Stages

SqlFirewallRule.Definition The first stage of the SQL Server Firewall Rule definition.
Stages.Blank<ParentT>

SqlFirewallRule.Definition The final stage of the SQL Firewall Rule definition.


Stages.WithAttach<ParentT>

SqlFirewallRule.Definition The SQL Firewall Rule definition to set the IP address for the
Stages.With parent SQL Server.
IPAddress<ParentT>

SqlFirewallRule.Definition The SQL Firewall Rule definition to set the IP address range for
Stages.WithIPAddress the parent SQL Server.
Range<ParentT>
SqlFirewallRule.SqlFirewallRule Container interface for all the definitions that need to be
Definition<ParentT> implemented.

SqlFirewallRule.Update The template for a SQL Firewall Rule update operation,


containing all the settings that can be modified.

SqlFirewallRule.UpdateStages Grouping of all the SQL Firewall Rule update stages.

SqlFirewallRule.UpdateStages. The SQL Firewall Rule definition to set the starting IP Address for
WithEndIPAddress the server.

SqlFirewallRule.UpdateStages. The SQL Firewall Rule definition to set the starting IP Address for
WithStartIPAddress the server.

SqlFirewallRuleOperations A representation of the Azure SQL Firewall rule operations.

SqlFirewallRuleOperations. Grouping of all the SQL Firewall rule definition stages.


DefinitionStages

SqlFirewallRuleOperations. The final stage of the SQL Firewall Rule definition.


DefinitionStages.WithCreate

SqlFirewallRuleOperations. The SQL Firewall Rule definition to set the IP address range for
DefinitionStages.With the parent SQL Server.
IPAddressRange

SqlFirewallRuleOperations. The first stage of the SQL Server Firewall rule definition.
DefinitionStages.WithSql
Server

SqlFirewallRuleOperations.Sql Grouping of the Azure SQL Server Firewall Rule common actions.
FirewallRuleActionsDefinition

SqlFirewallRuleOperations.Sql Container interface for all the definitions that need to be


FirewallRuleOperations implemented.
Definition

SqlRestorableDropped Response containing Azure SQL restorable dropped database.


Database

SqlServer An immutable client-side representation of an Azure SQL Server.

SqlServer.Definition Container interface for all the definitions that need to be


implemented.

SqlServer.DefinitionStages Grouping of all the storage account definition stages.

SqlServer.DefinitionStages. The first stage of the SQL Server definition.


Blank

SqlServer.DefinitionStages. A SQL Server definition setting the Active Directory


WithActiveDirectory administrator.
Administrator

SqlServer.DefinitionStages. A SQL Server definition setting administrator user name.


WithAdministratorLogin

SqlServer.DefinitionStages. A SQL Server definition setting admin user password.


WithAdministratorPassword

SqlServer.DefinitionStages. A SQL Server definition with sufficient inputs to create a new


WithCreate SQL Server in the cloud, but exposing additional optional inputs
to specify.

SqlServer.DefinitionStages. A SQL Server definition for specifying the databases.


WithDatabase

SqlServer.DefinitionStages. A SQL Server definition for specifying elastic pool.


WithElasticPool

SqlServer.DefinitionStages. The stage of the SQL Server definition allowing to specify the
WithFirewallRule SQL Firewall rules.

SqlServer.DefinitionStages. A SQL Server definition allowing resource group to be set.


WithGroup

SqlServer.DefinitionStages. A SQL Server definition setting the managed service identity.


WithSystemAssignedManaged
ServiceIdentity

SqlServer.DefinitionStages. The stage of the SQL Server definition allowing to specify the
WithVirtualNetworkRule SQL Virtual Network Rules.

SqlServer.Update The template for a SQLServer update operation, containing all


the settings that can be modified.

SqlServer.UpdateStages Grouping of all the SQLServer update stages.

SqlServer.UpdateStages.With A SQL Server update stage setting admin user password.


AdministratorPassword

SqlServer.UpdateStages.With A SQL Server definition for specifying the databases.


Database

SqlServer.UpdateStages.With A SQL Server definition for specifying elastic pool.


ElasticPool

SqlServer.UpdateStages.With The stage of the SQL Server update definition allowing to specify
FirewallRule the SQL Firewall rules.

SqlServer.UpdateStages.With A SQL Server definition setting the managed service identity.


SystemAssignedManaged
ServiceIdentity
SqlServerAutomaticTuning An immutable client-side representation of an Azure SQL Server
automatic tuning object.

SqlServerAutomaticTuning. The template for a SqlServerAutomaticTuning update operation,


Update containing all the settings that can be modified.

SqlServerAutomaticTuning. Grouping of all the SqlServerAutomaticTuning update stages.


UpdateStages

SqlServerAutomaticTuning. The update stage setting the SQL server automatic tuning
UpdateStages.WithAutomatic desired state.
TuningMode

SqlServerAutomaticTuning. The update stage setting the server automatic tuning options.
UpdateStages.WithAutomatic
TuningOptions

SqlServerDnsAlias An immutable client-side representation of an Azure SQL Server


DNS alias.

SqlServerDnsAliasOperations A representation of the Azure SQL Server DNS alias operations.

SqlServerDnsAliasOperations. Grouping of all the SQL Server DNS alias definition stages.
DefinitionStages

SqlServerDnsAliasOperations. The final stage of the SQL Server DNS alias definition.
DefinitionStages.WithCreate

SqlServerDnsAliasOperations. The first stage of the SQL Server DNS alias definition.
DefinitionStages.WithSql
Server

SqlServerDnsAliasOperations. Grouping of the Azure SQL Server DNS alias common actions.
SqlServerDnsAliasActions
Definition

SqlServerDnsAliasOperations. Container interface for all the definitions that need to be


SqlServerDnsAliasOperations implemented.
Definition

SqlServerKey An immutable client-side representation of an Azure SQL Server


Key.

SqlServerKey.Update The template for a SQL Server Key update operation, containing
all the settings that can be modified.

SqlServerKey.UpdateStages Grouping of all the SQL Server Key update stages.

SqlServerKey.UpdateStages. The SQL Server Key definition to set the server key creation date.
WithCreationDate
SqlServerKey.UpdateStages. The SQL Server Key definition to set the thumbprint.
WithThumbprint

SqlServerKeyOperations A representation of the Azure SQL Server Key operations.

SqlServerKeyOperations. Grouping of all the SQL Server Key definition stages.


DefinitionStages

SqlServerKeyOperations. The final stage of the SQL Server Key definition.


DefinitionStages.WithCreate

SqlServerKeyOperations. The SQL Server Key definition to set the server key creation date.
DefinitionStages.WithCreation
Date

SqlServerKeyOperations. The SQL Server Key definition to set the server key type.
DefinitionStages.WithServer
KeyType

SqlServerKeyOperations. The first stage of the SQL Server Key definition.


DefinitionStages.WithSql
Server

SqlServerKeyOperations. The SQL Server Key definition to set the thumbprint.


DefinitionStages.With
Thumbprint

SqlServerKeyOperations.Sql Grouping of the Azure SQL Server Key common actions.


ServerKeyActionsDefinition

SqlServerKeyOperations.Sql Container interface for all the definitions that need to be


ServerKeyOperations implemented.
Definition

SqlServerSecurityAlertPolicy An immutable client-side representation of an Azure SQL Server


Security Alert Policy.

SqlServerSecurityAlertPolicy. The template for a SQL Server Security Alert Policy update
Update operation, containing all the settings that can be modified.

SqlServerSecurityAlertPolicy. Grouping of all the SQL Server Security Alert Policy update
UpdateStages stages.

SqlServerSecurityAlertPolicy. The SQL Server Security Alert Policy update definition to set an
UpdateStages.WithDisabled array of alerts that are disabled.
Alerts

SqlServerSecurityAlertPolicy. The SQL Server Security Alert Policy update definition to set if an
UpdateStages.WithEmail alert will be sent to the account administrators.
AccountAdmins
SqlServerSecurityAlertPolicy. The SQL Server Security Alert Policy update definition to set an
UpdateStages.WithEmail array of e-mail addresses to which the alert is sent.
Addresses

SqlServerSecurityAlertPolicy. The SQL Server Security Alert Policy update definition to set the
UpdateStages.WithRetention number of days to keep in the Threat Detection audit logs.
Days

SqlServerSecurityAlertPolicy. The SQL Server Security Alert Policy update definition to set the
UpdateStages.WithState state.

SqlServerSecurityAlertPolicy. The SQL Server Security Alert Policy update definition to specify
UpdateStages.WithStorage the storage account blob endpoint and access key.
Account

SqlServerSecurityAlertPolicy A representation of the Azure SQL Server Security Alert Policy


Operations operations.

SqlServerSecurityAlertPolicy Grouping of all the SQL Server Security Alert Policy definition
Operations.DefinitionStages stages.

SqlServerSecurityAlertPolicy The final stage of the SQL Server Security Alert Policy definition.
Operations.DefinitionStages.
WithCreate

SqlServerSecurityAlertPolicy The SQL Server Security Alert Policy definition to set an array of
Operations.DefinitionStages. alerts that are disabled.
WithDisabledAlerts

SqlServerSecurityAlertPolicy The SQL Server Security Alert Policy definition to set if an alert
Operations.DefinitionStages. will be sent to the account administrators.
WithEmailAccountAdmins

SqlServerSecurityAlertPolicy The SQL Server Security Alert Policy definition to set an array of
Operations.DefinitionStages. e-mail addresses to which the alert is sent.
WithEmailAddresses

SqlServerSecurityAlertPolicy The SQL Server Security Alert Policy definition to set the number
Operations.DefinitionStages. of days to keep in the Threat Detection audit logs.
WithRetentionDays

SqlServerSecurityAlertPolicy The first stage of the SQL Server Security Alert Policy definition.
Operations.DefinitionStages.
WithSqlServer

SqlServerSecurityAlertPolicy The SQL Server Security Alert Policy definition to set the state.
Operations.DefinitionStages.
WithState

SqlServerSecurityAlertPolicy The SQL Server Security Alert Policy definition to specify the
Operations.DefinitionStages. storage account blob endpoint and access key.
WithStorageAccount

SqlServerSecurityAlertPolicy Grouping of the Azure SQL Server Security Alert Policy common
Operations.SqlServerSecurity actions.
AlertPolicyActionsDefinition

SqlServerSecurityAlertPolicy Container interface for all the definitions that need to be


Operations.SqlServerSecurity implemented.
AlertPolicyOperations
Definition

SqlServers Entry point to SQL Server management API.

SqlSubscriptionUsageMetric The result of SQL server usages per current subscription.

SqlSyncFullSchemaProperty An immutable client-side representation of an Azure SQL Server


Sync Group.

SqlSyncGroup An immutable client-side representation of an Azure SQL Server


Sync Group.

SqlSyncGroup.Update The template for a SQL Sync Group update operation, containing
all the settings that can be modified.

SqlSyncGroup.UpdateStages Grouping of all the SQL Sync Group update stages.

SqlSyncGroup.UpdateStages. The SQL Sync Group definition to set the conflict resolution
WithConflictResolutionPolicy policy.

SqlSyncGroup.UpdateStages. The SQL Sync Group definition to set the database login
WithDatabasePassword password.

SqlSyncGroup.UpdateStages. The SQL Sync Group definition to set the database user name.
WithDatabaseUserName

SqlSyncGroup.UpdateStages. The SQL Sync Group definition to set the sync frequency.
WithInterval

SqlSyncGroup.UpdateStages. The SQL Sync Group definition to set the schema.


WithSchema

SqlSyncGroup.UpdateStages. The SQL Sync Group definition to set the database ID to sync
WithSyncDatabaseId with.

SqlSyncGroupLogProperty An immutable client-side representation of an Azure SQL Server


Sync Group.

SqlSyncGroupOperations A representation of the Azure SQL Sync Group operations.

SqlSyncGroupOperations. Grouping of all the SQL Sync Group definition stages.


DefinitionStages
SqlSyncGroupOperations. The SQL Sync Group definition to set the conflict resolution
DefinitionStages.WithConflict policy.
ResolutionPolicy

SqlSyncGroupOperations. The final stage of the SQL Sync Group definition.


DefinitionStages.WithCreate

SqlSyncGroupOperations. The SQL Sync Group definition to set the database login
DefinitionStages.With password.
DatabasePassword

SqlSyncGroupOperations. The SQL Sync Group definition to set the database user name.
DefinitionStages.With
DatabaseUserName

SqlSyncGroupOperations. The SQL Sync Group definition to set the sync frequency.
DefinitionStages.WithInterval

SqlSyncGroupOperations. The SQL Sync Group definition to set the schema.


DefinitionStages.WithSchema

SqlSyncGroupOperations. The first stage of the SQL Sync Group definition.


DefinitionStages.WithSql
Server

SqlSyncGroupOperations. The SQL Sync Group definition to set the database ID to sync
DefinitionStages.WithSync with.
DatabaseId

SqlSyncGroupOperations. The SQL Sync Group definition to set the parent database name.
DefinitionStages.WithSync
GroupDatabase

SqlSyncGroupOperations.Sql Grouping of the Azure SQL Server Sync Group common actions.
SyncGroupActionsDefinition

SqlSyncGroupOperations.Sql Container interface for all the definitions that need to be


SyncGroupOperations implemented.
Definition

SqlSyncMember An immutable client-side representation of an Azure SQL Server


Sync Member.

SqlSyncMember.Update The template for a SQL Sync Group update operation, containing
all the settings that can be modified.

SqlSyncMember.UpdateStages Grouping of all the SQL Sync Group update stages.

SqlSyncMember.Update The SQL Sync Member definition to set the database type.
Stages.WithMemberDatabase
Type
SqlSyncMember.Update The SQL Sync Member definition to set the member database
Stages.WithMemberPassword password.

SqlSyncMember.Update The SQL Sync Member definition to set the member database
Stages.WithMemberUser user name.
Name

SqlSyncMember.Update The SQL Sync Member definition to set the sync direction.
Stages.WithSyncDirection

SqlSyncMemberOperations A representation of the Azure SQL Sync Member operations.

SqlSyncMemberOperations. Grouping of all the SQL Sync Member definition stages.


DefinitionStages

SqlSyncMemberOperations. The final stage of the SQL Sync Member definition.


DefinitionStages.WithCreate

SqlSyncMemberOperations. The SQL Sync Member definition to set the database type.
DefinitionStages.WithMember
DatabaseType

SqlSyncMemberOperations. The SQL Sync Member definition to set the member database
DefinitionStages.WithMember password.
Password

SqlSyncMemberOperations. The SQL Sync Member definition to set the member database.
DefinitionStages.WithMember
SqlDatabase

SqlSyncMemberOperations. The SQL Sync Member definition to set the member server and
DefinitionStages.WithMember database.
SqlServer

SqlSyncMemberOperations. The SQL Sync Member definition to set the member database
DefinitionStages.WithMember user name.
UserName

SqlSyncMemberOperations. The first stage of the SQL Sync Member definition.


DefinitionStages.WithSql
Server

SqlSyncMemberOperations. The SQL Sync Member definition to set the sync direction.
DefinitionStages.WithSync
Direction

SqlSyncMemberOperations. The SQL Sync Member definition to set the parent database
DefinitionStages.WithSync name.
GroupName

SqlSyncMemberOperations. The SQL Sync Member definition to set the parent database
DefinitionStages.WithSync name.
MemberDatabase

SqlSyncMemberOperations. Grouping of the Azure SQL Server Sync Member common


SqlSyncMemberActions actions.
Definition

SqlSyncMemberOperations. Container interface for all the definitions that need to be


SqlSyncMemberOperations implemented.
Definition

SqlVirtualNetworkRule An immutable client-side representation of an Azure SQL Server


Virtual Network Rule.

SqlVirtualNetworkRule. Grouping of all the SQL Virtual Network Rule definition stages.
DefinitionStages

SqlVirtualNetworkRule. The first stage of the SQL Server Virtual Network Rule definition.
DefinitionStages.
Blank<ParentT>

SqlVirtualNetworkRule. The final stage of the SQL Virtual Network Rule definition.
DefinitionStages.With
Attach<ParentT>

SqlVirtualNetworkRule. The SQL Virtual Network Rule definition to set ignore flag for the
DefinitionStages.WithService missing subnet's SQL service endpoint entry.
Endpoint<ParentT>

SqlVirtualNetworkRule. The SQL Virtual Network Rule definition to set the virtual
DefinitionStages.With network ID and the subnet name.
Subnet<ParentT>

SqlVirtualNetworkRule.Sql Container interface for all the definitions that need to be


VirtualNetworkRule implemented.
Definition<ParentT>

SqlVirtualNetworkRule.Update The template for a SQL Virtual Network Rule update operation,
containing all the settings that can be modified.

SqlVirtualNetworkRule.Update Grouping of all the SQL Virtual Network Rule update stages.
Stages

SqlVirtualNetworkRule.Update The SQL Virtual Network Rule definition to set ignore flag for the
Stages.WithServiceEndpoint missing subnet's SQL service endpoint entry.

SqlVirtualNetworkRule.Update The SQL Virtual Network Rule definition to set the virtual
Stages.WithSubnet network ID and the subnet name.

SqlVirtualNetworkRule A representation of the Azure SQL Virtual Network rule


Operations operations.
SqlVirtualNetworkRule Grouping of all the SQL Virtual Network Rule definition stages.
Operations.DefinitionStages

SqlVirtualNetworkRule The final stage of the SQL Virtual Network Rule definition.
Operations.DefinitionStages.
WithCreate

SqlVirtualNetworkRule The SQL Virtual Network Rule definition to set ignore flag for the
Operations.DefinitionStages. missing subnet's SQL service endpoint entry.
WithServiceEndpoint

SqlVirtualNetworkRule The first stage of the SQL Server Virtual Network Rule definition.
Operations.DefinitionStages.
WithSqlServer

SqlVirtualNetworkRule The SQL Virtual Network Rule definition to set the virtual
Operations.DefinitionStages. network ID and the subnet name.
WithSubnet

SqlVirtualNetworkRule Grouping of the Azure SQL Server Virtual Network Rule common
Operations.SqlVirtualNetwork actions.
RuleActionsDefinition

SqlVirtualNetworkRule Container interface for all the definitions that need to be


Operations.SqlVirtualNetwork implemented.
RuleOperationsDefinition

SqlWarehouse An immutable client-side representation of an Azure SQL


Warehouse.

TransparentDataEncryption An immutable client-side representation of an Azure SQL


database's TransparentDataEncryption.

TransparentDataEncryption An immutable client-side representation of an Azure SQL


Activity database's TransparentDataEncryptionActivity.

Enums
AuthenticationType Defines values for AuthenticationType.

AutomaticTuningDisabled Defines values for AutomaticTuningDisabledReason.


Reason

AutomaticTuningMode Defines values for AutomaticTuningMode.

AutomaticTuningOptionMode Defines values for AutomaticTuningOptionModeActual.


Actual

AutomaticTuningOptionMode Defines values for AutomaticTuningOptionModeDesired.


Desired

AutomaticTuningServerMode Defines values for AutomaticTuningServerMode.

AutomaticTuningServer Defines values for AutomaticTuningServerReason.


Reason

BackupLongTermRetention Defines values for BackupLongTermRetentionPolicyState.


PolicyState

BlobAuditingPolicyState Defines values for BlobAuditingPolicyState.

CapabilityStatus Defines values for CapabilityStatus.

CheckNameAvailabilityReason Defines values for CheckNameAvailabilityReason.

DataMaskingFunction Defines values for DataMaskingFunction.

DataMaskingRuleState Defines values for DataMaskingRuleState.

DataMaskingState Defines values for DataMaskingState.

GeoBackupPolicyState Defines values for GeoBackupPolicyState.

JobScheduleType Defines values for JobScheduleType.

JobTargetGroupMembership Defines values for JobTargetGroupMembershipType.


Type

MaxSizeUnits Defines values for MaxSizeUnits.

PerformanceLevelUnit Defines values for PerformanceLevelUnit.

ReadScale Defines values for ReadScale.

RecommendedIndexAction Defines values for RecommendedIndexAction.

RecommendedIndexState Defines values for RecommendedIndexState.

RecommendedIndexType Defines values for RecommendedIndexType.

ReplicationRole Defines values for ReplicationRole.

RestorePointType Defines values for RestorePointType.

SecurityAlertPolicyEmail Defines values for SecurityAlertPolicyEmailAccountAdmins.


AccountAdmins

SecurityAlertPolicyState Defines values for SecurityAlertPolicyState.

SecurityAlertPolicyUseServer Defines values for SecurityAlertPolicyUseServerDefault.


Default

SensitivityLabelSource Defines values for SensitivityLabelSource.


ServerConnectionType Defines values for ServerConnectionType.

SqlDatabaseBasicStorage The maximum allowed storage capacity for a "Basic" edition of


an Azure SQL Elastic Pool.

SqlDatabasePremiumStorage The maximum allowed storage capacity for a "Premium" edition


of an Azure SQL Elastic Pool.

SqlDatabaseStandardStorage The maximum allowed storage capacity for a "Standard" edition


of an Azure SQL Elastic Pool.

SqlElasticPoolBasicEDTUs The reserved eDTUs value range for a "Basic" edition of an Azure
SQL Elastic Pool.

SqlElasticPoolBasicMaxEDTUs The maximum limit of the reserved eDTUs value range for a
"Basic" edition of an Azure SQL Elastic Pool.

SqlElasticPoolBasicMinEDTUs The minimum limit of the reserved eDTUs value range for a
"Basic" edition of an Azure SQL Elastic Pool.

SqlElasticPoolPremiumEDTUs The reserved eDTUs value range for a "Premium" edition of an


Azure SQL Elastic Pool.

SqlElasticPoolPremiumMax The maximum limit of the reserved eDTUs value range for a
EDTUs "Premium" edition of an Azure SQL Elastic Pool.

SqlElasticPoolPremiumMin The minimum limit of the reserved eDTUs value range for a
EDTUs "Premium" edition of an Azure SQL Elastic Pool.

SqlElasticPoolPremiumSorage The maximum allowed storage capacity for a "Premium" edition


of an Azure SQL Elastic Pool.

SqlElasticPoolStandardEDTUs The reserved eDTUs value range for a "Standard" edition of an


Azure SQL Elastic Pool.

SqlElasticPoolStandardMax The maximum limit of the reserved eDTUs value range for a
EDTUs "Standard" edition of an Azure SQL Elastic Pool.

SqlElasticPoolStandardMin The minimum limit of the reserved eDTUs value range for a
EDTUs "Premium" edition of an Azure SQL Elastic Pool.

SqlElasticPoolStandardStorage The maximum allowed storage capacity for a "Standard" edition


of an Azure SQL Elastic Pool.

StorageKeyType Defines values for StorageKeyType.

TransparentDataEncryption Defines values for TransparentDataEncryptionStatus.


Status

VulnerabilityAssessmentPolicy Defines values for VulnerabilityAssessmentPolicyBaselineName.


BaselineName
Azure SQL Database REST API
Article • 10/13/2022

The Azure SQL Database REST API includes operations for managing Azure SQL
Database resources.

REST Operation Groups for 2022-02-01 Preview


Operation Group Description

Backup Short Term Retention Create, get, update, list a database's short term retention
Policies policy.

Data Warehouse User Activities Get and list the user activities of a data warehouse which
includes running and suspended queries.

Database Advanced Threat Create, get, update, list a database's Advanced Threat
Protection Settings Protection state.

Database Advisors Get and list database advisors

Database Automatic Tuning Get and update a database's automatic tuning.

Database Columns Get and list database columns.

Database Extensions Perform a database extension operation, like polybase import.

Database Operations Get a list of operations performed on the database or cancels


the asynchronous operation on the database.

Database Recommended Get and update a database recommended action.


Actions

Database Schemas Get and list database schemas.

Database Security Alert Policies Create, get, update, list a database's security alert policy.

Database Tables Get and list database tables.

Database Usages Get database usages.

Database Vulnerability Create, get, update, list, delete the database's vulnerability
Assesment Rule Baselines assessment rule baseline.

Database Vulnerability Get, list, execute, export the vulnerability assessment scans of
Assessment Scans a database.
Operation Group Description

Database Vulnerability Create, get, update, list, delete the database's vulnerability
Assessments assessment.

Databases Create, get, update, list, delete, import, export, rename, pause,
resume, upgrade SQL databases.

Deleted Servers Get, list, recover the deleted servers

Elastic Pool Operations Gets a list of operations performed on the elastic pool or
cancels the asynchronous operation on the elastic pool.

Elastic Pools Create, get, update, delete, failover the elastic pools.

Encryption Protectors Get, update, list, revalidate the existing encryption protectors.

Endpoint Certificates Get and list the certificates used on endpoints on the target
instance.

Failover Groups Create, get, update, list, delete, and failover a failover group.

Firewall Rules Create, get, update, delete, list firewall rules.

Instance Failover Groups Create, get, update, list, delete, and failover an instance
failover group.

Instance Pools Create, get, update, list, delete the instance pools.

Job Agents Create, get, update, list, delete the job agents.

Job Credentials Create, get, update, list, delete the job credentials.

Job Executions Create, get, update, list, cancel the job executions.

Job Step Executions Get and list the step executions of a job execution.

Job Steps Create, get, update, list, delete job steps for a job's current
version.

Job Target Executions Get or list the target executions of a job step execution.

Job Target Groups Create, get, update, list, delete the job target groups.

Job Versions Get or list job versions.

Jobs Create, get, update, list, delete jobs.

Ledger Digest Uploads Create, get, update, list the ledger digest upload configuration
for a database.

Location Capabilities Get the subscription capabilities available for the specified
location.
Operation Group Description

Long Term Retention Backups Create, get, update, list, delete a long term retention backup.

Long Term Retention Managed Create, get, update, list, delete a long term retention backup
Instance Backups for a managed database.

Long Term Retention Policies Get, list, set a database's long term retention policy.

Maintenance Window Options Get a list of available maintenance windows.

Maintenance Windows Get or set maintenance windows settings for a database.

Managed Backup Short Term Create, get, update, list a managed database's short term
Retention Policies retention policy.

Managed Database Columns Get or list managed database columns.

Managed Database Queries Get query or query execution statistics by query id of a


managed database.

Managed Database Restore Get managed database restore details.


Details

Managed Database Schemas Get or list managed database schemas.

Managed Database Security Create, get, update, list the managed database security alert
Alert Policies policies.

Managed Database Security Get a list of managed database security events.


Events

Managed Database Sensitivity Create, get, update, list the sensitivity labels of a given
Labels database. Or enable or disable sensitivity recommendations on
a given column.

Managed Database Tables Get or list managed database tables.

Managed Database Transparent Create, get, update, list a managed database's transparent
Data Encryption data encryption.

Managed Database Vulnerability Create, get, update, list a managed database's vulnerability
Assessment Rule Baselines assessment rule baseline.

Managed Database Vulnerability Get, list, execute, export a managed database's vulnerability
Assessment Scans assessment scans.

Managed Database Vulnerability Create, get, update, list, delete a managed database's
Assessments vulnerability assessments.

Managed Databases Create, get, update, list, delete, restore the managed
databases.
Operation Group Description

Managed Instance Create, get, update, list, delete managed instance


Administrators administrators.

Managed Instance Azure AD Get, set, list, delete the existing server Active Directory only
Only Authentications authentication properties.

Managed Instance Encryption Get, update, list, revalidate the existing encryption protectors
Protectors of a managed instance.

Managed Instance Keys Create, get, update, list, delete the managed instance keys.

Managed Instance Long Term Create, get, list, update the managed instance's long term
Retention Policies retention policies.

Managed Instance Operations Get, list, cancel the operations performed on the managed
instance.

Managed Instance Private Create, get, list, update, delete the private endpoint
Endpoint Connections connections on a managed instance.

Managed Instance Private Link Get or list the private link resources on the managed instance.
Resources

Managed Instance Tde Create a Transparent Data Encryption certificate for a given
Certificates managed instance.

Managed Instance Vulnerability Create, get, list, update, delete the managed instance's
Assessments vulnerability assessment policies.

Managed Instances Create, get, update, list, delete, failover the managed
instances.

Managed Restorable Dropped Create, get, update, list the managed restorable dropped
Database Backup Short Term database's short term retention policies
Retention Policies

Managed Server DNS Aliases Create, get, list, acquire a managed server DNS alias.

Managed Server Security Alert Create, get, list, update the managed server's security alert
Policies policies.

Operations List all of the available SQL Database REST API operations.

Outbound Firewall Rules Create, get, update, list, delete the outbound firewall rules.

Private Endpoint Connections Create, get, update, list, delete the private endpoint
connections on a server.

Private Link Resources Get or list the private link resources for SQL server.
Operation Group Description

Recoverable Managed Get or list recoverable managed databases.


Databases

Replication Links Get, list, delete, and failover replication links.

Restorable Dropped Databases Get or list restorable dropped databases.

Restorable Dropped Managed Get or list restorable dropped managed databases.


Databases

Restore Points Create, get, update, list, delete database restore points.

Sensitivity Labels Create, get, update, list the sensitivity labels of a given
database. Or enable or disable sensitivity recommendations on
a given column.

Server Advanced Threat Create, get, update, list the server's Advanced Threat
Protection Settings Protection states.

Server Advisors Get, list, update server advisors.

Server Automatic Tuning Get or update automatic tuning options on server.

Server Azure AD Administrators Create, get, list, update, delete Azure Active Directory
administrators in a server.

Server Azure AD Only Create, get, list, update, delete server Active Directory only
Authentications authentication property.

Server Blob Auditing Policies Create, get, update, list an extended server or database's blob
auditing policy.

Server Devops audit setting Create, get, list, update DevOps audit settings of a server.

Server Dns Aliases Create, get, list, acquire or delete a server DNS alias.

Server Keys Create, get, list, update, delete server keys.

Server Operations Get a list of operations performed on the server.

Server Security Alert Policies Create, get, list, update a server's security alert policies.

Server Trust Groups Create, get, list, update, delete server trust groups.

Server Vulnerability Assessments Create, get, list, update, delete the server vulnerability
assessment policies.

Servers Create, get, update, list, delete information about an Azure


SQL server. and determine whether a resource can be created
with the specified name.
Operation Group Description

Sql Agent Get or set the sql agent configuration to instance.

Subscription Usages Get or list the subscription usage metrics.

Sync Agents Create, get, list, update, delete the sync agents. Or generate a
sync agent key.

Sync Groups Create, get, list, update, delete the sync groups. Or refreshes a
hub database schema.

Sync Members Create, get, list, update, delete the sync members.

Tde Certificates Create a Transparent Data Encryption certificate for a given


server.

Time Zones Get or list the managed instance time zones.

Transparent Data Encryptions Create, get, list, update a logical database's transparent data
encryption configurations.

Usages Gets all instance pool usage metrics.

Virtual Clusters Create, get, list, update, delete the virtual clusters.

Virtual Network Rules Create, get, list, update, delete the virtual network rules.

Workload Classifiers Create, get, list, update, delete the workload classifiers.

Workload Groups Create, get, list, update, delete the workload groups.

Elastic Pool Activities Get the activities for an elastic pool.

Elastic Pool Database Activities Get the activities for databases in an elastic pool.

Recoverable Databases Get a recoverable database, or list all recoverable databases


for a server.

Transparent Data Encryption Returns a database's transparent data encryption operation


Activities result.

REST Operation Groups for 2021-11-01 Stable


Operation Group Description

Backup Short Term Retention Create, get, update, list a database's short term retention
Policies policy.
Operation Group Description

Data Warehouse User Activities Get and list the user activities of a data warehouse which
includes running and suspended queries.

Database Advanced Threat Create, get, update, list a database's Advanced Threat
Protection Settings Protection state.

Database Advisors Get and list database advisors

Database Automatic Tuning Get and update a database's automatic tuning.

Database Columns Get and list database columns.

Database Extensions Perform a database extension operation, like polybase import.

Database Operations Get a list of operations performed on the database or cancels


the asynchronous operation on the database.

Database Recommended Get and update a database recommended action.


Actions

Database Schemas Get and list database schemas.

Database Security Alert Policies Create, get, update, list a database's security alert policy.

Database Tables Get and list database tables.

Database Usages Get database usages.

Database Vulnerability Create, get, update, list, delete the database's vulnerability
Assesment Rule Baselines assessment rule baseline.

Database Vulnerability Get, list, execute, export the vulnerability assessment scans of
Assessment Scans a database.

Database Vulnerability Create, get, update, list, delete the database's vulnerability
Assessments assessment.

Databases Create, get, update, list, delete, import, export, rename, pause,
resume, upgrade SQL databases.

Deleted Servers Get, list, recover the deleted servers

Elastic Pool Operations Gets a list of operations performed on the elastic pool or
cancels the asynchronous operation on the elastic pool.

Elastic Pools Create, get, update, delete, failover the elastic pools.

Encryption Protectors Get, update, list, revalidate the existing encryption protectors.

Endpoint Certificates Get and list the certificates used on endpoints on the target
instance.
Operation Group Description

Failover Groups Create, get, update, list, delete, and failover a failover group.

Firewall Rules Create, get, update, delete, list firewall rules.

Instance Failover Groups Create, get, update, list, delete, and failover an instance
failover group.

Instance Pools Create, get, update, list, delete the instance pools.

Job Agents Create, get, update, list, delete the job agents.

Job Credentials Create, get, update, list, delete the job credentials.

Job Executions Create, get, update, list, cancel the job executions.

Job Step Executions Get and list the step executions of a job execution.

Job Steps Create, get, update, list, delete job steps for a job's current
version.

Job Target Executions Get or list the target executions of a job step execution.

Job Target Groups Create, get, update, list, delete the job target groups.

Job Versions Get or list job versions.

Jobs Create, get, update, list, delete jobs.

Ledger Digest Uploads Create, get, update, list the ledger digest upload configuration
for a database.

Location Capabilities Get the subscription capabilities available for the specified
location.

Long Term Retention Backups Create, get, update, list, delete a long term retention backup.

Long Term Retention Managed Create, get, update, list, delete a long term retention backup
Instance Backups for a managed database.

Long Term Retention Policies Get, list, set a database's long term retention policy.

Maintenance Window Options Get a list of available maintenance windows.

Maintenance Windows Get or set maintenance windows settings for a database.

Managed Backup Short Term Create, get, update, list a managed database's short term
Retention Policies retention policy.

Managed Database Columns Get or list managed database columns.


Operation Group Description

Managed Database Queries Get query or query execution statistics by query id of a


managed database.

Managed Database Restore Get managed database restore details.


Details

Managed Database Schemas Get or list managed database schemas.

Managed Database Security Create, get, update, list the managed database security alert
Alert Policies policies.

Managed Database Security Get a list of managed database security events.


Events

Managed Database Sensitivity Create, get, update, list the sensitivity labels of a given
Labels database. Or enable or disable sensitivity recommendations on
a given column.

Managed Database Tables Get or list managed database tables.

Managed Database Transparent Create, get, update, list a managed database's transparent
Data Encryption data encryption.

Managed Database Vulnerability Create, get, update, list a managed database's vulnerability
Assessment Rule Baselines assessment rule baseline.

Managed Database Vulnerability Get, list, execute, export a managed database's vulnerability
Assessment Scans assessment scans.

Managed Database Vulnerability Create, get, update, list, delete a managed database's
Assessments vulnerability assessments.

Managed Databases Create, get, update, list, delete, restore the managed
databases.

Managed Instance Create, get, update, list, delete managed instance


Administrators administrators.

Managed Instance Azure AD Get, set, list, delete the existing server Active Directory only
Only Authentications authentication properties.

Managed Instance Encryption Get, update, list, revalidate the existing encryption protectors
Protectors of a managed instance.

Managed Instance Keys Create, get, update, list, delete the managed instance keys.

Managed Instance Long Term Create, get, list, update the managed instance's long term
Retention Policies retention policies.
Operation Group Description

Managed Instance Operations Get, list, cancel the operations performed on the managed
instance.

Managed Instance Private Create, get, list, update, delete the private endpoint
Endpoint Connections connections on a managed instance.

Managed Instance Private Link Get or list the private link resources on the managed instance.
Resources

Managed Instance Tde Create a Transparent Data Encryption certificate for a given
Certificates managed instance.

Managed Instance Vulnerability Create, get, list, update, delete the managed instance's
Assessments vulnerability assessment policies.

Managed Instances Create, get, update, list, delete, failover the managed
instances.

Managed Restorable Dropped Create, get, update, list the managed restorable dropped
Database Backup Short Term database's short term retention policies
Retention Policies

Managed Server DNS Aliases Create, get, list, acquire a managed server DNS alias.

Managed Server Security Alert Create, get, list, update the managed server's security alert
Policies policies.

Operations List all of the available SQL Database REST API operations.

Outbound Firewall Rules Create, get, update, list, delete the outbound firewall rules.

Private Endpoint Connections Create, get, update, list, delete the private endpoint
connections on a server.

Private Link Resources Get or list the private link resources for SQL server.

Recoverable Managed Get or list recoverable managed databases.


Databases

Replication Links Get, list, delete, and failover replication links.

Restorable Dropped Databases Get or list restorable dropped databases.

Restorable Dropped Managed Get or list restorable dropped managed databases.


Databases

Restore Points Create, get, update, list, delete database restore points.
Operation Group Description

Sensitivity Labels Create, get, update, list the sensitivity labels of a given
database. Or enable or disable sensitivity recommendations on
a given column.

Server Advanced Threat Create, get, update, list the server's Advanced Threat
Protection Settings Protection states.

Server Advisors Get, list, update server advisors.

Server Automatic Tuning Get or update automatic tuning options on server.

Server Azure AD Administrators Create, get, list, update, delete Azure Active Directory
administrators in a server.

Server Azure AD Only Create, get, list, update, delete server Active Directory only
Authentications authentication property.

Server Blob Auditing Policies Create, get, update, list an extended server or database's blob
auditing policy.

Server Devops audit setting Create, get, list, update DevOps audit settings of a server.

Server Dns Aliases Create, get, list, acquire or delete a server DNS alias.

Server Keys Create, get, list, update, delete server keys.

Server Operations Get a list of operations performed on the server.

Server Security Alert Policies Create, get, list, update a server's security alert policies.

Server Trust Groups Create, get, list, update, delete server trust groups.

Server Vulnerability Assessments Create, get, list, update, delete the server vulnerability
assessment policies.

Servers Create, get, update, list, delete information about an Azure


SQL server. and determine whether a resource can be created
with the specified name.

Sql Agent Get or set the sql agent configuration to instance.

Subscription Usages Get or list the subscription usage metrics.

Sync Agents Create, get, list, update, delete the sync agents. Or generate a
sync agent key.

Sync Groups Create, get, list, update, delete the sync groups. Or refreshes a
hub database schema.

Sync Members Create, get, list, update, delete the sync members.
Operation Group Description

Tde Certificates Create a Transparent Data Encryption certificate for a given


server.

Time Zones Get or list the managed instance time zones.

Transparent Data Encryptions Create, get, list, update a logical database's transparent data
encryption configurations.

Usages Gets all instance pool usage metrics.

Virtual Clusters Create, get, list, update, delete the virtual clusters.

Virtual Network Rules Create, get, list, update, delete the virtual network rules.

Workload Classifiers Create, get, list, update, delete the workload classifiers.

Workload Groups Create, get, list, update, delete the workload groups.

See Also
Azure SQL Database
Azure SQL Data Warehouse
Azure SQL Database Elastic Pool
Latest Stable Version of Azure SQL Database REST API
Microsoft.Sql resource types
Article • 02/13/2023

This article lists the available versions for each resource type.

For a list of changes in each API version, see change log

Resource types and versions


Types Versions

instancePools 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

Types Versions

locations/deletedServers 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

locations/instanceFailoverGroups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

Types Versions

locations/longTermRetentionManagedInstances/longTermRetentionDatabases/longTermRetentionManagedInstanceBackups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

Types Versions

locations/longTermRetentionServers/longTermRetentionDatabases/longTermRetentionBackups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

locations/managedDatabaseMoveOperationResults 2022-
05-01-
preview

Types Versions

locations/serverTrustGroups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

locations/timeZones 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

locations/usages 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

Types Versions

managedInstances 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

2015-
05-01-
preview

Types Versions

managedInstances/administrators 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

managedInstances/advancedThreatProtectionSettings 2022-
05-01-
preview

2022-
02-01-
preview

Types Versions

managedInstances/azureADOnlyAuthentications 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

managedInstances/databases 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

2018-
06-01-
preview

2017-
03-01-
preview

managedInstances/databases/advancedThreatProtectionSettings 2022-
05-01-
preview

2022-
02-01-
preview

Types Versions

managedInstances/databases/backupLongTermRetentionPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

Types Versions

managedInstances/databases/backupShortTermRetentionPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

managedInstances/databases/queries 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

managedInstances/databases/restoreDetails 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

2018-
06-01-
preview

Types Versions

managedInstances/databases/schemas 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

managedInstances/databases/schemas/tables 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

managedInstances/databases/schemas/tables/columns 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

managedInstances/databases/schemas/tables/columns/sensitivityLabels 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

Types Versions

managedInstances/databases/securityAlertPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

managedInstances/databases/transparentDataEncryption 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

managedInstances/databases/vulnerabilityAssessments 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

Types Versions

managedInstances/databases/vulnerabilityAssessments/rules/baselines 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

Types Versions

managedInstances/databases/vulnerabilityAssessments/scans 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

managedInstances/distributedAvailabilityGroups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

Types Versions

managedInstances/dnsAliases 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

managedInstances/dtc 2022-
05-01-
preview

2022-
02-01-
preview

managedInstances/encryptionProtector 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

Types Versions

managedInstances/endpointCertificates 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

managedInstances/keys 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

Types Versions

managedInstances/operations 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

2018-
06-01-
preview

Types Versions

managedInstances/privateEndpointConnections 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

managedInstances/privateLinkResources 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

managedInstances/recoverableDatabases 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

Types Versions

managedInstances/restorableDroppedDatabases 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

managedInstances/restorableDroppedDatabases/backupShortTermRetentionPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

managedInstances/securityAlertPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

managedInstances/serverTrustCertificates 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

Types Versions

managedInstances/sqlAgent 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

managedInstances/vulnerabilityAssessments 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

Types Versions

servers 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

2015-
05-01-
preview

2014-
04-01

Types Versions

servers/administrators 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

2018-
06-01-
preview

2014-
04-01

servers/advancedThreatProtectionSettings 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

Types Versions

servers/advisors 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

2014-
04-01

servers/auditingPolicies 2014-
04-01

Types Versions

servers/auditingSettings 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/automaticTuning 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/azureADOnlyAuthentications 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

servers/communicationLinks 2014-
04-01

servers/connectionPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2014-
04-01

Types Versions

servers/databases 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

2017-
10-01-
preview

2017-
03-01-
preview

2014-
04-01

servers/databases/advancedThreatProtectionSettings 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

Types Versions

servers/databases/advisors 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

2014-
04-01

Types Versions

servers/databases/advisors/recommendedActions 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

servers/databases/auditingPolicies 2014-
04-01

Types Versions

servers/databases/auditingSettings 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

2015-
05-01-
preview

Types Versions

servers/databases/automaticTuning 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

Types Versions

servers/databases/backupLongTermRetentionPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/databases/backupShortTermRetentionPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

servers/databases/connectionPolicies 2014-
04-01

servers/databases/dataMaskingPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2014-
04-01

servers/databases/dataMaskingPolicies/rules 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2014-
04-01

Types Versions

servers/databases/dataWarehouseUserActivities 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/databases/extendedAuditingSettings 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/databases/extensions 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2014-
04-01

servers/databases/geoBackupPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2014-
04-01

servers/databases/ledgerDigestUploads 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

Types Versions

servers/databases/replicationLinks 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2014-
04-01

Types Versions

servers/databases/restorePoints 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/databases/schemas 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

servers/databases/schemas/tables 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

servers/databases/schemas/tables/columns 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

Types Versions

servers/databases/schemas/tables/columns/sensitivityLabels 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/databases/securityAlertPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

2014-
04-01

servers/databases/serviceTierAdvisors 2014-
04-01

servers/databases/sqlVulnerabilityAssessments 2022-
05-01-
preview

2022-
02-01-
preview

servers/databases/sqlVulnerabilityAssessments/baselines 2022-
05-01-
preview

2022-
02-01-
preview

servers/databases/sqlVulnerabilityAssessments/baselines/rules 2022-
05-01-
preview

2022-
02-01-
preview

Types Versions

servers/databases/sqlVulnerabilityAssessments/scans 2022-
05-01-
preview

2022-
02-01-
preview

servers/databases/sqlVulnerabilityAssessments/scans/scanResults 2022-
05-01-
preview

2022-
02-01-
preview

servers/databases/syncGroups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

2015-
05-01-
preview

Types Versions

servers/databases/syncGroups/syncMembers 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

2015-
05-01-
preview

Types Versions

servers/databases/transparentDataEncryption 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2014-
04-01

Types Versions

servers/databases/vulnerabilityAssessments 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/databases/vulnerabilityAssessments/rules/baselines 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/databases/vulnerabilityAssessments/scans 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

Types Versions

servers/databases/workloadGroups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

Types Versions

servers/databases/workloadGroups/workloadClassifiers 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2019-
06-01-
preview

Types Versions

servers/devOpsAuditingSettings 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

servers/disasterRecoveryConfiguration 2014-
04-01

Types Versions

servers/dnsAliases 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/elasticPools 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
10-01-
preview

2014-
04-01

servers/elasticPools/databases 2014-
04-01

Types Versions

servers/encryptionProtector 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

Types Versions

servers/extendedAuditingSettings 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/failoverGroups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

Types Versions

servers/firewallRules 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

2014-
04-01

servers/ipv6FirewallRules 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

Types Versions

servers/jobAgents 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/credentials 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/jobs 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/jobs/executions 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/jobs/executions/steps 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/jobs/executions/steps/targets 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/jobs/steps 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/jobs/versions 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/jobs/versions/steps 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/jobAgents/targetGroups 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

Types Versions

servers/keys 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

servers/outboundFirewallRules 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

Types Versions

servers/privateEndpointConnections 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

Types Versions

servers/privateLinkResources 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

servers/recommendedElasticPools 2014-
04-01

servers/recommendedElasticPools/databases 2014-
04-01

servers/recoverableDatabases 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2014-
04-01

Types Versions

servers/restorableDroppedDatabases 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2014-
04-01

Types Versions

servers/securityAlertPolicies 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2017-
03-01-
preview

servers/serviceObjectives 2014-
04-01

servers/sqlVulnerabilityAssessments 2022-
05-01-
preview

2022-
02-01-
preview

Types Versions

servers/syncAgents 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

Types Versions

servers/virtualNetworkRules 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

Types Versions

servers/vulnerabilityAssessments 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2018-
06-01-
preview

Types Versions

virtualClusters 2022-
05-01-
preview

2022-
02-01-
preview

2021-
11-01

2021-
11-01-
preview

2021-
08-01-
preview

2021-
05-01-
preview

2021-
02-01-
preview

2020-
11-01-
preview

2020-
08-01-
preview

2020-
02-02-
preview

2015-
05-01-
preview

SQL tools overview


Article • 04/03/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

Azure Synapse Analytics
Analytics Platform System (PDW)

To manage your database, you need a tool. Whether your databases run in the cloud, on
Windows, on macOS, or on Linux, your tool doesn't need to run on the same platform as
the database.

You can view the links to the different SQL tools in the following tables.

7 Note

To download SQL Server, see Install SQL Server.

Recommended tools
The following tools provide a graphical user interface (GUI).

Tool Description Operating


system

A light-weight editor that can run on-demand SQL queries, view and Windows

save results as text, JSON, or Excel. Edit data, organize your favorite macOS

database connections, and browse database objects in a familiar Linux


object browsing experience.

Azure Data
Studio

Manage a SQL Server instance or database with full GUI support. Windows
Access, configure, manage, administer, and develop all components
of SQL Server, Azure SQL Database, and Azure Synapse Analytics.
Provides a single comprehensive utility that combines a broad
SQL Server group of graphical tools with a number of rich script editors to
Management provide access to SQL for developers and database administrators
Studio of all skill levels.
(SSMS)
Tool Description Operating
system

A modern development tool for building SQL Server relational Windows


databases, Azure SQL databases, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS)
SQL Server reports. With SSDT, you can design and deploy any SQL Server
Data Tools content type with the same ease as you would develop an
(SSDT) application in Visual Studio .

The mssql extension for Visual Studio Code is the official SQL Windows

Server extension that supports connections to SQL Server and rich macOS

editing experience for T-SQL in Visual Studio Code. Write T-SQL Linux
scripts in a light-weight editor.

Visual Studio
Code

Command-line tools
The tools below are the main command-line tools.

Tool Description Operating


system

bcp The bulk copy program utility (bcp) bulk copies data between an Windows

instance of Microsoft SQL Server and a data file in a user-specified macOS

format. Linux

mssql-cli mssql-cli is an interactive command-line tool for querying SQL Server. Windows

(preview) Also, query SQL Server with a command-line tool that features macOS

IntelliSense, syntax high-lighting, and more. Linux

mssql-conf mssql-conf configures SQL Server running on Linux. Linux

mssql- mssql-scripter is a multi-platform command-line experience for Windows

scripter scripting SQL Server databases. macOS

(preview) Linux

sqlcmd sqlcmd utility lets you enter Transact-SQL statements, system Windows

procedures, and script files at the command prompt. macOS

Linux

sqlpackage sqlpackage is a command-line utility that automates several database Windows

development tasks. macOS

Linux
Tool Description Operating
system

SQL Server SQL Server PowerShell provides cmdlets for working with SQL. Windows

PowerShell macOS

Linux

Migration and other tools


These tools are used to migrate, configure, and provide other features for SQL
databases.

Tool Description

Configuration Use SQL Server Configuration Manager to configure SQL Server services and
Manager configure network connectivity. Configuration Manager runs on Windows

Database Use Database Experimentation Assistant to evaluate a targeted version of SQL


Experimentation for a given workload.
Assistant

Data Migration The Data Migration Assistant tool helps you upgrade to a modern data
Assistant platform by detecting compatibility issues that can impact database
functionality in your new version of SQL Server or Azure SQL Database.

Distributed Use the Distributed Replay feature to help you assess the impact of future SQL
Replay Server upgrades. Also use Distributed Replay to help assess the impact of
hardware and operating system upgrades, and SQL Server tuning.

ssbdiagnose The ssbdiagnose utility reports issues in Service Broker conversations or the
configuration of Service Broker services.

SQL Server Use SQL Server Migration Assistant to automate database migration to SQL
Migration Server from Microsoft Access, DB2, MySQL, Oracle, and Sybase.
Assistant

If you're looking for additional tools that aren't mentioned on this page, see SQL
Command Prompt Utilities and Download SQL Server extended features and tools
Download SQL Server Management
Studio (SSMS)
Article • 06/28/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

Azure Synapse Analytics
SQL Endpoint in Microsoft Fabric
Warehouse in
Microsoft Fabric

SQL Server Management Studio (SSMS) is an integrated environment for managing any
SQL infrastructure, from SQL Server to Azure SQL Database. SSMS provides tools to
configure, monitor, and administer instances of SQL Server and databases. Use SSMS to
deploy, monitor, and upgrade the data-tier components used by your applications and
build queries and scripts.

Use SSMS to query, design, and manage your databases and data warehouses, wherever
they are - on your local computer or in the cloud.

Download SSMS

Free Download for SQL Server Management Studio (SSMS) 19.1

SSMS 19.1 is the latest general availability (GA) version. If you have a preview version of
SSMS 19 installed, you should uninstall it before installing SSMS 19.1. If you have SSMS
19.x installed, installing SSMS 19.1 upgrades it to 19.1.

Release number: 19.1


Build number: 19.1.56.0
Release date: May 24, 2023

By using SQL Server Management Studio, you agree to its license terms and privacy
statement . If you have comments or suggestions or want to report issues, the best
way to contact the SSMS team is at SQL Server user feedback .

The SSMS 19.x installation doesn't upgrade or replace SSMS versions 18.x or earlier.
SSMS 19.x installs alongside previous versions, so both versions are available for use.
However, if you have an earlier preview version of SSMS 19 installed, you must uninstall
it before installing SSMS 19.1. You can see if you have a preview version by going to the
Help > About window.

If a computer contains side-by-side installations of SSMS, verify you start the correct
version for your specific needs. The latest version is labeled Microsoft SQL Server
Management Studio v19.1.

) Important

Beginning with SQL Server Management Studio (SSMS) 18.7, Azure Data Studio is
automatically installed alongside SSMS. Users of SQL Server Management Studio
are now able to benefit from the innovations and features in Azure Data Studio.
Azure Data Studio is a cross-platform and open-source desktop tool for your
environments, whether in the cloud, on-premises, or hybrid.

To learn more about Azure Data Studio, check out What is Azure Data Studio or
the FAQ.

Available languages
This release of SSMS can be installed in the following languages:

SQL Server Management Studio 19.1:

Chinese (Simplified) | Chinese (Traditional) | English (United States) | French |


German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian |
Spanish

 Tip

If you are accessing this page from a non-English language version and want to see
the most up-to-date content, please select Read in English at the top of this page.
You can download different languages from the US-English version site by selecting
available languages.

7 Note

The SQL Server PowerShell module is a separate install through the PowerShell
Gallery. For more information, see Download SQL Server PowerShell Module.

What's new
For details and more information about what's new in this release, see Release notes for
SQL Server Management Studio.
Previous versions
This article is for the latest version of SSMS only. To download previous versions of
SSMS, visit Previous SSMS releases.

7 Note

In December 2021, releases of SSMS prior to 18.6 will no longer authenticate to


Database Engines through Azure Active Directory with MFA.
To continue utilizing
Azure Active Directory authentication with MFA, you need SSMS 18.6 or later.

Connectivity to Azure Analysis Services through Azure Active Directory with MFA
requires SSMS 18.5.1 or later.

Unattended install
You can install SSMS using PowerShell.

Follow the steps below if you want to install SSMS in the background with no GUI
prompts.

1. Launch PowerShell with elevated permissions.

2. Type the command below.

PowerShell

$media_path = "<path where SSMS-Setup-ENU.exe file is located>"

$install_path = "<root location where all SSMS files will be


installed>"

$params = " /Install /Quiet SSMSInstallRoot=$install_path"

Start-Process -FilePath $media_path -ArgumentList $params -Wait

Example:

PowerShell

$media_path = "C:\Installers\SSMS-Setup-ENU.exe"

$install_path = "$env:SystemDrive\SSMSto"

$params = "/Install /Quiet SSMSInstallRoot=`"$install_path`""

Start-Process -FilePath $media_path -ArgumentList $params -Wait

You can also pass /Passive instead of /Quiet to see the setup UI.

3. If all goes well, you can see SSMS installed at


%systemdrive%\SSMSto\Common7\IDE\Ssms.exe based on the example. If
something went wrong, you could inspect the error code returned and review the
log file in %TEMP%\SSMSSetup.

Installation with Azure Data Studio


SSMS installs Azure Data Studio by default.
The installation of Azure Data Studio by SSMS is skipped if an equal or higher
version of Azure Data Studio is already installed.
The Azure Data Studio version can be found in the release notes.
The Azure Data Studio system installer requires the same security rights as the
SSMS installer.
The Azure Data Studio installation is completed with the default Azure Data Studio
installation options. These are to create a Start Menu folder and add Azure Data
Studio to PATH. A desktop shortcut isn't created, and Azure Data Studio isn't
registered as a default editor for any file type.
Localization of Azure Data Studio is accomplished through Language Pack
extensions. To localize Azure Data Studio, download the corresponding language
pack from the extension marketplace.
At this time, the installation of Azure Data Studio can be skipped by launching the
SSMS installer with the command line flag DoNotInstallAzureDataStudio=1 .

Uninstall
SSMS may install shared components if it's determined they're missing during SSMS
installation. SSMS won't automatically uninstall these components when you uninstall
SSMS.

The shared components are:

Azure Data Studio


Microsoft OLE DB Driver for SQL Server
Microsoft ODBC Driver 17 for SQL Server
Microsoft Visual C++ 2013 Redistributable (x86)
Microsoft Visual C++ 2017 Redistributable (x86)
Microsoft Visual C++ 2017 Redistributable (x64)
Microsoft Visual Studio Tools for Applications 2019
These components aren't uninstalled because they can be shared with other products. If
uninstalled, you may run the risk of disabling other products.

Supported SQL offerings


This version of SSMS works with SQL Server 2014 and higher and provides the
most significant level of support for working with the latest cloud features in Azure
SQL Database, Azure Synapse Analytics, and Microsoft Fabric.
Additionally, SSMS 19.x can be installed alongside with SSMS 18.x, SSMS 17.x,
SSMS 16.x.
SQL Server Integration Services (SSIS) - SSMS version 17.x or later doesn't support
connecting to the legacy SQL Server Integration Services service. To connect to an
earlier version of the legacy Integration Services, use the version of SSMS aligned
with the version of SQL Server. For example, use SSMS 16.x to connect to the
legacy SQL Server 2016 Integration Services service. SSMS 17.x and SSMS 16.x can
be installed on the same computer. Since the release of SQL Server 2012, the SSIS
Catalog database, SSISDB, is the recommended way to store, manage, run, and
monitor Integration Services packages. For details, see SSIS Catalog.

SSMS System Requirements


The current release of SSMS supports the following 64-bit platforms when used with the
latest available service pack:

Supported Operating Systems:

Windows 11 (64-bit)
Windows 10 (64-bit) version 1607 (10.0.14393) or later
Windows Server 2022 (64-bit)
Windows Server 2019 (64-bit)
Windows Server 2016 (64-bit)

Supported hardware:

1.8 GHz or faster x86 (Intel, AMD) processor. Dual-core or better recommended
2 GB of RAM; 4 GB of RAM recommended (2.5 GB minimum if running on a virtual
machine)
Hard disk space: Minimum of 2 GB up to 10 GB of available space

7 Note
SSMS is available only as a 32-bit application for Windows. If you need a tool that
runs on operating systems other than Windows, we recommend Azure Data Studio.
Azure Data Studio is a cross-platform tool that runs on macOS, Linux, and
Windows. For details, see Azure Data Studio.


Get help for SQL tools
All the ways to get help
SSMS user feedback .
Submit an Azure Data Studio Git issue
Contribute to Azure Data Studio
SQL Client Tools Forum
SQL Server Data Tools - MSDN forum
Support options for business users

Next steps
SQL tools
SQL Server Management Studio documentation
Azure Data Studio
Download SQL Server Data Tools (SSDT)
Latest updates
Azure Data Architecture Guide
SQL Server Blog


Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.

For more information, see How to contribute to SQL Server documentation


Download SQL Server Data Tools (SSDT)
for Visual Studio
Article • 07/07/2023

Applies to:
SQL Server
Azure SQL Database
Azure Synapse Analytics

SQL Server Data Tools (SSDT) is a modern development tool for building SQL Server
relational databases, databases in Azure SQL, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS) reports. With SSDT, you
can design and deploy any SQL Server content type with the same ease as you would
develop an application in Visual Studio.

SSDT for Visual Studio 2022

Changes in SSDT for Visual Studio 2022


The core SSDT functionality to create database projects has remained integral to Visual
Studio.

7 Note

There's no SSDT standalone installer for Visual Studio 2022.

Install SSDT with Visual Studio 2022


If Visual Studio 2022 is already installed, you can edit the list of workloads to include
SSDT. If you don't have Visual Studio 2022 installed, then you can download and install
Visual Studio 2022 .

To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.

1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".
2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.

3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.

For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .

Analysis Services
Integration Services
Reporting Services

Supported SQL versions in Visual Studio 2022


Project Templates SQL Platforms Supported

Relational databases SQL Server 2016 (13.x) - SQL Server 2022 (16.x)

Azure SQL Database, Azure SQL Managed Instance

Azure Synapse Analytics (dedicated pools only)

Analysis Services models


SQL Server 2016 - SQL Server 2022

Reporting Services reports

Integration Services packages SQL Server 2019 - SQL Server 2022

License terms for Visual Studio


To understand the license terms and use cases for Visual Studio, refer to (Visual Studio
License Directory)[https://visualstudio.microsoft.com/license-terms/]. For example, if you
are using the Community Edition of Visual Studio for SQL Server Data Tools, review the
EULA for that specific edition of Visual Studio in the Visual Studio License Directory.

SSDT for Visual Studio 2019

Changes in SSDT for Visual Studio 2019


The core SSDT functionality to create database projects has remained integral to Visual
Studio.

With Visual Studio 2019, the required functionality to enable Analysis Services,
Integration Services, and Reporting Services projects has moved into the respective
Visual Studio (VSIX) extensions only.

7 Note

There's no SSDT standalone installer for Visual Studio 2019.

Install SSDT with Visual Studio 2019


If Visual Studio 2019 is already installed, you can edit the list of workloads to include
SSDT. If you don't have Visual Studio 2019 installed, then you can download and install
Visual Studio 2019 Community .
To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.

1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".

2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.

3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.

For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .

Analysis Services
Integration Services
Reporting Services

Supported SQL versions in Visual Studio 2019

Project Templates SQL Platforms Supported

Relational databases SQL Server 2012 - SQL Server 2019

Azure SQL Database, Azure SQL Managed Instance

Azure Synapse Analytics (dedicated pools only)

Analysis Services models


SQL Server 2008 - SQL Server 2019

Reporting Services reports

Integration Services packages SQL Server 2012 - SQL Server 2022

Offline installation
For scenarios where offline installation is required, such as low bandwidth or isolated
networks, SSDT is available for offline installation. Two approaches are available:

For a single machine, Download All, then install


For installation on one or more machines, use the Visual Studio bootstrapper from
the command line

For more details you can follow the Step-by-Step Guidelines for Offline Installation

Previous versions
To download and install SSDT for Visual Studio 2017, or an older version of SSDT, see
Previous releases of SQL Server Data Tools (SSDT and SSDT-BI).

See Also
SSDT MSDN Forum

SSDT Team Blog

DACFx API Reference

Download SQL Server Management Studio (SSMS)


Next steps
After installation of SSDT, work through these tutorials to learn how to create databases,
packages, data models, and reports using SSDT.

Project-Oriented Offline Database Development

SSIS Tutorial: Create a Simple ETL Package

Analysis Services tutorials

Create a Basic Table Report (SSRS Tutorial)


Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback


Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.

For more information, see How to contribute to SQL Server documentation


bcp utility
Article • 07/12/2023

Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Azure Synapse Analytics Analytics Platform System (PDW)

The bulk copy program utility (bcp) bulk copies data between an instance of Microsoft
SQL Server and a data file in a user-specified format.

7 Note

For using bcp on Linux, see Install sqlcmd and bcp on Linux.

For detailed information about using bcp with Azure Synapse Analytics, see Load
data with bcp.

The bcp utility can be used to import large numbers of new rows into SQL Server tables
or to export data out of tables into data files. Except when used with the queryout
option, the utility requires no knowledge of Transact-SQL. To import data into a table,
you must either use a format file created for that table or understand the structure of
the table and the types of data that are valid for its columns.

For the syntax conventions that are used for the bcp syntax, see Transact-SQL syntax
conventions.

7 Note

If you use bcp to back up your data, create a format file to record the data format.
bcp data files do not include any schema or format information, so if a table or
view is dropped and you do not have a format file, you may be unable to import
the data.

Download the latest version of the bcp utility


The command-line tools are General Availability (GA), however they're being released
with the installer package for SQL Server 2019 (15.x).

Version information
Release number: 15.0.4298.1
Build number: 15.0.4298.1
Release date: April 7, 2023

The new version of sqlcmd supports Azure AD authentication, including Multi-Factor


Authentication (MFA) support for SQL Database, Azure Synapse Analytics, and Always
Encrypted features.

The new bcp supports Azure AD authentication, including Multi-Factor Authentication


(MFA) support for SQL Database and Azure Synapse Analytics.

System requirements
Windows 7, Windows 8, Windows 8.1, Windows 10, Windows 11

Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows
Server 2012 R2, Windows Server 2016, Windows Server 2019, Windows Server 2022

This component requires both Windows Installer 4.5 and the latest Microsoft ODBC
Driver for SQL Server.

To check the bcp version, execute bcp -v command, and confirm that 15.0.4298.1 or
later is in use.

Syntax
Console

bcp [database_name.] schema.{table_name | view_name | "query"}


{in data_file | out data_file | queryout data_file | format nul}

[-a packet_size]
[-b batch_size]
[-c]
[-C { ACP | OEM | RAW | code_page } ]
[-d database_name]
[-D]
[-e err_file]
[-E]
[-f format_file]
[-F first_row]
[-G Azure Active Directory Authentication]
[-h"hint [,...n]"]
[-i input_file]
[-k]
[-K application_intent]
[-l login_timeout]
[-L last_row]
[-m max_errors]
[-n]
[-N]
[-o output_file]
[-P password]
[-q]
[-r row_term]
[-R]
[-S [server_name[\instance_name]]]
[-t field_term]
[-T]
[-U login_id]
[-v]
[-V (80 | 90 | 100 | 110 | 120 | 130 | 140 | 150 | 160 ) ]
[-w]
[-x]

Command-line options

database_name

The name of the database in which the specified table or view resides. If not specified,
this is the default database for the user.

You can also explicitly specify the database name with -d .

schema
The name of the owner of the table or view. schema is optional if the user performing
the operation owns the specified table or view. If schema isn't specified and the user
performing the operation doesn't own the specified table or view, SQL Server returns an
error message, and the operation is canceled.

table_name

The name of the destination table when importing data into SQL Server ( in ), and the
source table when exporting data from SQL Server ( out ).

view_name

The name of the destination view when copying data into SQL Server ( in ), and the
source view when copying data from SQL Server ( out ). Only views in which all columns
refer to the same table can be used as destination views. For more information on the
restrictions for copying data into views, see INSERT (Transact-SQL).

"query"

A Transact-SQL query that returns a result set. If the query returns multiple result sets,
only the first result set is copied to the data file; subsequent result sets are ignored. Use
double quotation marks around the query and single quotation marks around anything
embedded in the query. queryout must also be specified when bulk copying data from a
query.

The query can reference a stored procedure as long as all tables referenced inside the
stored procedure exist prior to executing the bcp statement. For example, if the stored
procedure generates a temp table, the bcp statement fails because the temp table is
available only at run time and not at statement execution time. In this case, consider
inserting the results of the stored procedure into a table and then use bcp to copy the
data from the table into a data file.

in

copies from a file into the database table or view. Specifies the direction of the bulk
copy.

out
Copies from the database table or view to a file. Specifies the direction of the bulk copy.

If you specify an existing file, the file is overwritten. When extracting data, the bcp utility
represents an empty string as a null and a null string as an empty string.

data_file

The full path of the data file. When data is bulk imported into SQL Server, the data file
contains the data to be copied into the specified table or view. When data is bulk
exported from SQL Server, the data file contains the data copied from the table or view.
The path can have from 1 through 255 characters. The data file can contain a maximum
of 2^63 - 1 rows.

queryout
Copies from a query and must be specified only when bulk copying data from a query.
format
Creates a format file based on the option specified ( -n , -c , -w , or -N ) and the table or
view delimiters. When bulk copying data, the bcp command can refer to a format file,
which saves you from reentering format information interactively. The format option
requires the -f option; creating an XML format file, also requires the -x option. For
more information, see Create a Format File (SQL Server). You must specify nul as the
value ( format nul ).

-a packet_size

Specifies the number of bytes, per network packet, sent to and from the server. A server
configuration option can be set by using SQL Server Management Studio (or the
sp_configure system stored procedure). However, the server configuration option can

be overridden on an individual basis by using this option. packet_size can be from 4096
bytes to 65,535 bytes; the default is 4096 .

Increased packet size can enhance performance of bulk-copy operations. If a larger


packet is requested but can't be granted, the default is used. The performance statistics
generated by the bcp utility show the packet size used.

-b batch_size

Specifies the number of rows per batch of imported data. Each batch is imported and
logged as a separate transaction that imports the whole batch before being committed.
By default, all the rows in the data file are imported as one batch. To distribute the rows
among multiple batches, specify a batch_size that is smaller than the number of rows in
the data file. If the transaction for any batch fails, only insertions from the current batch
are rolled back. Batches already imported by committed transactions are unaffected by a
later failure.

Don't use this option with the -h "ROWS_PER_BATCH=<bb>" option.

-c
Performs the operation using a character data type. This option doesn't prompt for each
field; it uses char as the storage type, without prefixes and with \t (tab character) as the
field separator and \r\n (newline character) as the row terminator. -c isn't compatible
with -w .

For more information, see Use Character Format to Import or Export Data (SQL Server).
-C { ACP | OEM | RAW | code_page }
Specifies the code page of the data in the data file. code_page is relevant only if the data
contains char, varchar, or text columns with character values greater than 127 or less
than 32.

7 Note

We recommend specifying a collation name for each column in a format file, except
when you want the 65001 option to have priority over the collation/code page
specification.

Code page Description


value

ACP ANSI/Microsoft Windows (ISO 1252).

OEM Default code page used by the client. This is the default code page used if -C isn't
specified.

RAW No conversion from one code page to another occurs. This is the fastest option
because no conversion occurs.

code_page Specific code page number; for example, 850.

Versions prior to version 13 (SQL Server 2016 (13.x)) don't support code page
65001 (UTF-8 encoding). Versions beginning with 13 can import UTF-8 encoding
to earlier versions of SQL Server.

-d database_name
Specifies the database to connect to. By default, bcp connects to the user's default
database. If -d database_name and a three part name (database_name.schema.table,
passed as the first parameter to bcp) are specified, an error occurs because you can't
specify the database name twice. If database_name begins with a hyphen ( - ) or a
forward slash ( / ), don't add a space between -d and the database name.

-D
Causes the value passed to the bcp -S option to be interpreted as a data source name
(DSN). A DSN may be used to embed driver options to simplify command lines, enforce
driver options that aren't otherwise accessible from the command line such as
MultiSubnetFailover, or to help protect sensitive credentials from being discoverable as
command line arguments. For more information, see DSN Support in sqlcmd and bcp in
Connecting with sqlcmd.

-e err_file

Specifies the full path of an error file used to store any rows that the bcp utility can't
transfer from the file to the database. Error messages from the bcp command go to the
workstation of the user. If this option isn't used, an error file isn't created.

If err_file begins with a hyphen ( - ) or a forward slash ( / ), don't include a space between
-e and the err_file value.

-E
Specifies that identity value or values in the imported data file are to be used for the
identity column. If -E isn't given, the identity values for this column in the data file
being imported are ignored, and SQL Server automatically assigns unique values based
on the seed and increment values specified during table creation. For more information,
see DBCC CHECKIDENT.

If the data file doesn't contain values for the identity column in the table or view, use a
format file to specify that the identity column in the table or view should be skipped
when importing data; SQL Server automatically assigns unique values for the column.

The -E option has a special permissions requirement. For more information, see
"Remarks" later in this article.

-f format_file

Specifies the full path of a format file. The meaning of this option depends on the
environment in which it is used, as follows:

If -f is used with the format option, the specified format_file is created for the
specified table or view. To create an XML format file, also specify the -x option. For
more information, see Create a Format File (SQL Server).

If used with the in or out option, -f requires an existing format file.

7 Note

Using a format file in with the in or out option is optional. In the absence of
the -f option, if -n , -c , -w , or -N is not specified, the command prompts for
format information and lets you save your responses in a format file (whose
default file name is bcp.fmt ).

If format_file begins with a hyphen ( - ) or a forward slash ( / ), don't include a space


between -f and the format_file value.

-F first_row

Specifies the number of the first row to export from a table or import from a data file.
This parameter requires a value greater than ( > ) 0 but less than ( < ) or equal to ( = ) the
total number rows. In the absence of this parameter, the default is the first row of the
file.

first_row can be a positive integer with a value up to 2^63-1. -F first_row is 1-based.

-G
Applies to: Azure SQL Database and Azure Synapse Analytics only.

This switch is used by the client when connecting to Azure SQL Database or Azure
Synapse Analytics to specify that the user be authenticated using Azure Active Directory
authentication. The -G switch requires version 14.0.3008.27 or later versions. To
determine your version, execute bcp -v . For more information, see Use Azure Active
Directory Authentication for authentication with SQL Database or Azure Synapse
Analytics.

) Important

Azure AD Interactive Authentication is not currently supported on Linux or macOS.


Azure AD Integrated Authentication requires Microsoft ODBC Driver 17 for SQL
Server version 17.6.1 and later versions, and a properly configured Kerberos
environment.

 Tip

To check if your version of bcp includes support for Azure Active Directory (Azure
AD) Authentication, type bcp --help and verify that you see -G in the list of
available arguments.
Azure Active Directory Username and Password

When you want to use an Azure Active Directory user name and password, you can
provide the -G option and also use the user name and password by providing the
-U and -P options.

The following example exports data using Azure AD username and password
credentials. The example exports table bcptest from database testdb from Azure
server aadserver.database.windows.net and stores the data in file
c:\last\data1.dat :

Windows Command Prompt

bcp bcptest out "c:\last\data1.dat" -c -t -S


aadserver.database.windows.net -d testdb -G -U
alice@aadtest.onmicrosoft.com -P xxxxx

The following example imports data using Azure AD Username and Password
where user and password are an Azure AD credential. The example imports data
from file c:\last\data1.dat into table bcptest for database testdb on Azure
server aadserver.database.windows.net using Azure AD User/Password:

Windows Command Prompt

bcp bcptest in "c:\last\data1.dat" -c -t -S


aadserver.database.windows.net -d testdb -G -U
alice@aadtest.onmicrosoft.com -P xxxxx

Azure Active Directory Integrated

For Azure Active Directory Integrated authentication, provide the -G option


without a user name or password. This configuration assumes that the current
Windows user account (the account the bcp command is running under) is
federated with Azure AD:

The following example exports data using Azure AD-Integrated account. The
example exports table bcptest from database testdb using Azure AD Integrated
from Azure server aadserver.database.windows.net and stores the data in file
c:\last\data2.dat :

Windows Command Prompt

bcp bcptest out "c:\last\data2.dat" -S aadserver.database.windows.net -


d testdb -G -c -t

The following example imports data using Azure AD-Integrated auth. The example
imports data from file c:\last\data2.txt into table bcptest for database testdb
on Azure server aadserver.database.windows.net using Azure AD Integrated auth:

Windows Command Prompt

bcp bcptest in "c:\last\data2.dat" -S aadserver.database.windows.net -d


testdb -G -c -t

Azure Active Directory Interactive

The Azure AD Interactive authentication for Azure SQL Database and Azure
Synapse Analytics, allows you to use an interactive method supporting multi-factor
authentication. For more information, see Active Directory Interactive
Authentication.

Azure AD interactive requires bcp version 15.0.1000.34 or later as well as ODBC


version 17.2 or later.

To enable interactive authentication, provide the -G option with user name ( -U )


only, and no password.

The following example exports data using Azure AD interactive mode indicating
username where user represents an Azure AD account. This is the same example
used in the previous section: Azure Active Directory Username and Password.

Interactive mode requires a password to be manually entered, or for accounts with


multi-factor authentication enabled, complete your configured MFA authentication
method.

Windows Command Prompt

bcp bcptest out "c:\last\data1.dat" -c -t -S


aadserver.database.windows.net -d testdb -G -U
alice@aadtest.onmicrosoft.com

In case an Azure AD user is a domain federated one using Windows account, the
user name required in the command line, contains its domain account (for
example, joe@contoso.com ):

Windows Command Prompt


bcp bcptest out "c:\last\data1.dat" -c -t -S
aadserver.database.windows.net -d testdb -G -U joe@contoso.com

If guest users exist in a specific Azure AD and are part of a group that exists in SQL
Database that has database permissions to execute the bcp command, their guest
user alias is used (for example, keith0@adventure-works.com ).

-h "hints [, ... n]"


Specifies the hint or hints to be used during a bulk import of data into a table or view.

ORDER (column [ASC | DESC] [, ...n])

The sort order of the data in the data file. Bulk import performance is improved if
the data being imported is sorted according to the clustered index on the table, if
any. If the data file is sorted in a different order, that is other than the order of a
clustered index key, or if there is no clustered index on the table, the ORDER clause
is ignored. The column names supplied must be valid column names in the
destination table. By default, bcp assumes the data file is unordered. For optimized
bulk import, SQL Server also validates that the imported data is sorted.

ROWS_PER_BATCH = bb

Number of rows of data per batch (as bb). Used when -b isn't specified, resulting
in the entire data file being sent to the server as a single transaction. The server
optimizes the bulkload according to the value bb. By default, ROWS_PER_BATCH is
unknown.

KILOBYTES_PER_BATCH = cc

Approximate number of kilobytes of data per batch (as cc). By default,


KILOBYTES_PER_BATCH is unknown.

TABLOCK

Specifies that a bulk update table-level lock is acquired for the duration of the
bulkload operation; otherwise, a row-level lock is acquired. This hint significantly
improves performance because holding a lock for the duration of the bulk-copy
operation reduces lock contention on the table. A table can be loaded concurrently
by multiple clients if the table has no indexes and TABLOCK is specified. By default,
locking behavior is determined by the table option table lock on bulkload.
7 Note

If the target table is clustered columnstore index, TABLOCK hint is not


required for loading by multiple concurrent clients because each concurrent
thread is assigned a separate rowgroup within the index and loads data into
it. Please refer to columnstore index conceptual articles for details,

CHECK_CONSTRAINTS

Specifies that all constraints on the target table or view must be checked during
the bulk-import operation. Without the CHECK_CONSTRAINTS hint, any CHECK,
and FOREIGN KEY constraints are ignored, and after the operation the constraint
on the table is marked as not-trusted.

7 Note

UNIQUE, PRIMARY KEY, and NOT NULL constraints are always enforced.

At some point, you need to check the constraints on the entire table. If the table
was nonempty before the bulk import operation, the cost of revalidating the
constraint may exceed the cost of applying CHECK constraints to the incremental
data. Therefore, we recommend that normally you enable constraint checking
during an incremental bulk import.

A situation in which you might want constraints disabled (the default behavior) is if
the input data contains rows that violate constraints. With CHECK constraints
disabled, you can import the data and then use Transact-SQL statements to
remove data that isn't valid.

7 Note

bcp now enforces data validation and data checks that might cause scripts to
fail if they're executed on invalid data in a data file.

7 Note

The -m max_errors switch does not apply to constraint checking.

FIRE_TRIGGERS
Specified with the in argument, any insert triggers defined on the destination
table will run during the bulk-copy operation. If FIRE_TRIGGERS isn't specified, no
insert triggers will run. FIRE_TRIGGERS is ignored for the out , queryout , and
format arguments.

-i input_file

Specifies the name of a response file, containing the responses to the command prompt
questions for each data field when a bulk copy is being performed using interactive
mode ( -n , -c , -w , or -N not specified).

If input_file begins with a hyphen ( - ) or a forward slash ( / ), don't include a space


between -i and the input_file value.

-k
Specifies that empty columns should retain a null value during the operation, rather
than have any default values for the columns inserted. For more information, see Keep
Nulls or Use Default Values During Bulk Import (SQL Server).

-K application_intent

Declares the application workload type when connecting to a server. The only value that
is possible is ReadOnly. If -K isn't specified, the bcp utility doesn't support connectivity
to a secondary replica in an Always On availability group. For more information, see
Active Secondaries: Readable Secondary Replicas (Always On Availability Groups).

-l login_timeout
Specifies a login timeout. The -l option specifies the number of seconds before a login
to SQL Server times out when you try to connect to a server. The default login timeout is
15 seconds. The login timeout must be a number between 0 and 65534. If the value
supplied isn't numeric or doesn't fall into that range, bcp generates an error message. A
value of 0 specifies an infinite timeout.

-L last_row

Specifies the number of the last row to export from a table or import from a data file.
This parameter requires a value greater than ( > ) 0 but less than ( < ) or equal to ( = ) the
number of the last row. In the absence of this parameter, the default is the last row of
the file.

last_row can be a positive integer with a value up to 2^63-1.

-m max_errors
Specifies the maximum number of syntax errors that can occur before the bcp operation
is canceled. A syntax error implies a data conversion error to the target data type. The
max_errors total excludes any errors that can be detected only at the server, such as
constraint violations.

A row that can't be copied by the bcp utility is ignored and is counted as one error. If
this option isn't included, the default is 10.

7 Note

The -m option also does not apply to converting the money or bigint data types.

-n

Performs the bulk-copy operation using the native (database) data types of the data.
This option doesn't prompt for each field; it uses the native values.

For more information, see Use Native Format to Import or Export Data (SQL Server).

-N

Performs the bulk-copy operation using the native (database) data types of the data for
noncharacter data, and Unicode characters for character data. This option offers a
higher performance alternative to the -w option, and is intended for transferring data
from one instance of SQL Server to another using a data file. It doesn't prompt for each
field. Use this option when you are transferring data that contains ANSI extended
characters and you want to take advantage of the performance of native mode.

For more information, see Use Unicode Native Format to Import or Export Data (SQL
Server).

If you export and then import data to the same table schema by using bcp with -N , you
might see a truncation warning if there is a fixed length, non-Unicode character column
(for example, char(10)).
The warning can be ignored. One way to resolve this warning is to use -n instead of -N .

-o output_file
Specifies the name of a file that receives output redirected from the command prompt.

If output_file begins with a hyphen ( - ) or a forward slash ( / ), don't include a space


between -o and the output_file value.

-P password
Specifies the password for the login ID. If this option isn't used, the bcp command
prompts for a password. If this option is used at the end of the command prompt
without a password, bcp uses the default password (NULL).

) Important

Do not use a blank password. Use a strong password.

To mask your password, don't specify the -P option along with the -U option. Instead,
after specifying bcp along with the -U option and other switches (don't specify -P ),
press ENTER, and the command will prompt you for a password. This method ensures
that your password is masked when it is entered.

If password begins with a hyphen ( - ) or a forward slash ( / ), don't add a space between
-P and the password value.

-q
Executes the SET QUOTED_IDENTIFIERS ON statement in the connection between the
bcp utility and an instance of SQL Server. Use this option to specify a database, owner,
table, or view name that contains a space or a single quotation mark. Enclose the entire
three-part table or view name in quotation marks ("").

To specify a database name that contains a space or single quotation mark, you must
use the -q option.

-q doesn't apply to values passed to -d .

For more information, see Remarks, later in this article.


-r row_term
Specifies the row terminator. The default is \n (newline character). Use this parameter to
override the default row terminator. For more information, see Specify Field and Row
Terminators (SQL Server).

If you specify the row terminator in hexadecimal notation in a bcp command, the value
is truncated at 0x00 . For example, if you specify 0x410041 , 0x41 is used.

If row_term begins with a hyphen ( - ) or a forward slash ( / ), don't include a space


between -r and the row_term value.

-R
Specifies that currency, date, and time data is bulk copied into SQL Server using the
regional format defined for the locale setting of the client computer. By default, regional
settings are ignored.

-S server_name [\instance_name]
Specifies the instance of SQL Server to which to connect. If no server is specified, the
bcp utility connects to the default instance of SQL Server on the local computer. This
option is required when a bcp command is run from a remote computer on the network
or a local named instance. To connect to the default instance of SQL Server on a server,
specify only server_name. To connect to a named instance of SQL Server, specify
server_name**\**instance_name.

-t field_term
Specifies the field terminator. The default is \t (tab character). Use this parameter to
override the default field terminator. For more information, see Specify Field and Row
Terminators (SQL Server).

If you specify the field terminator in hexadecimal notation in a bcp command, the value
is truncated at 0x00 . For example, if you specify 0x410041 , 0x41 is used.

If field_term begins with a hyphen ( - ) or a forward slash ( / ), don't include a space


between -t and the field_term value.

-T
Specifies that the bcp utility connects to SQL Server with a trusted connection using
integrated security. The security credentials of the network user, login_id, and password
aren't required. If -T isn't specified, you need to specify -U and -P to successfully log
in.

) Important

When the bcp utility is connecting to SQL Server with a trusted connection using
integrated security, use the -T option (trusted connection) instead of the user
name and password combination. When the bcp utility is connecting to SQL
Database or Azure Synapse Analytics, using Windows authentication or Azure
Active Directory authentication is not supported. Use the -U and -P options.

-U login_id
Specifies the login ID used to connect to SQL Server.

) Important

When the bcp utility is connecting to SQL Server with a trusted connection using
integrated security, use the -T option (trusted connection) instead of the user
name and password combination. When the bcp utility is connecting to SQL
Database or Azure Synapse Analytics, using Windows authentication or Azure
Active Directory authentication is not supported. Use the -U and -P options.

-v
Reports the bcp utility version number and copyright.

-V (80 | 90 | 100 | 110 | 120 | 130 | 140 | 150 | 160)


Performs the bulk-copy operation using data types from an earlier version of SQL
Server. This option doesn't prompt for each field; it uses the default values.

80 = SQL Server 2000 (8.x)

90 = SQL Server 2005 (9.x)

100 = SQL Server 2008 (10.0.x) and SQL Server 2008 R2 (10.50.x)
110 = SQL Server 2012 (11.x)

120 = SQL Server 2014 (12.x)

130 = SQL Server 2016 (13.x)

140 = SQL Server 2017 (14.x)

150 = SQL Server 2019 (15.x)

160 = SQL Server 2022 (16.x)

For example, to generate data for types not supported by SQL Server 2000 (8.x), but
were introduced in later versions of SQL Server, use the -V80 option.

For more information, see Import Native and Character Format Data from Earlier
Versions of SQL Server.

-w
Performs the bulk copy operation using Unicode characters. This option doesn't prompt
for each field; it uses nchar as the storage type, no prefixes, \t (tab character) as the field
separator, and \n (newline character) as the row terminator. -w isn't compatible with -c .

For more information, see Use Unicode Character Format to Import or Export Data (SQL
Server).

-x

This option is used with the format and -f format_file options, and generates an XML-
based format file instead of the default non-XML format file. The -x doesn't work when
importing or exporting data. It generates an error if used without both format and -f
format_file.

Remarks
The bcp 13.0 client is installed when you install Microsoft SQL Server 2019 (15.x)
tools. If tools are installed for multiple versions of SQL Server, depending on the
order of values of the PATH environment variable, you might be using the earlier
bcp client instead of the bcp 13.0 client. This environment variable defines the set
of directories used by Windows to search for executable files. To discover which
version you are using, run the bcp -v command at the Windows Command
Prompt. For information about how to set the command path in the PATH
environment variable, see Environment Variables or search for Environment
Variables in Windows Help.

To make sure the newest version of the bcp utility is running, you need to remove
any older versions of the bcp utility.

To determine where all versions of the bcp utility are installed, type in the
command prompt:

Windows Command Prompt

where bcp.exe

The bcp utility can also be downloaded separately from the Microsoft SQL Server
2016 Feature Pack . Select either ENU\x64\MsSqlCmdLnUtils.msi or
ENU\x86\MsSqlCmdLnUtils.msi .

XML format files are only supported when SQL Server tools are installed together
with SQL Server Native Client.

For information about where to find or how to run the bcp utility and about the
command prompt utilities syntax conventions, see Command Prompt Utility
Reference (Database Engine).

For information on preparing data for bulk import or export operations, see
Prepare Data for Bulk Export or Import (SQL Server).

For information about when row-insert operations that are performed by bulk
import are logged in the transaction log, see Prerequisites for Minimal Logging in
Bulk Import.

Using additional special characters

The characters < , > , | , & , and ^ are special command shell characters, and they
must be preceded by the escape character ( ^ ), or enclosed in quotation marks
when used in String (for example, "StringContaining&Symbol" ). If you use
quotation marks to enclose a string that contains one of the special characters, the
quotation marks are set as part of the environment variable value.

Native data file support


In SQL Server, the bcp utility supports native data files compatible with SQL Server
versions starting with SQL Server 2000 (8.x) and later.
Computed columns and timestamp columns
Values in the data file being imported for computed or timestamp columns are ignored,
and SQL Server automatically assigns values. If the data file doesn't contain values for
the computed or timestamp columns in the table, use a format file to specify that the
computed or timestamp columns in the table should be skipped when importing data;
SQL Server automatically assigns values for the column.

Computed and timestamp columns are bulk copied from SQL Server to a data file as
usual.

Specify identifiers that contain spaces or


quotation marks
SQL Server identifiers can include characters such as embedded spaces and quotation
marks. Such identifiers must be treated as follows:

When you specify an identifier or file name that includes a space or quotation
mark at the command prompt, enclose the identifier in quotation marks ("").

For example, the following bcp out command creates a data file named Currency
Types.dat :

Windows Command Prompt

bcp AdventureWorks2012.Sales.Currency out "Currency Types.dat" -T -c

To specify a database name that contains a space or quotation mark, you must use
the -q option.

For owner, table, or view names that contain embedded spaces or quotation
marks, you can either:

Specify the -q option, or

Enclose the owner, table, or view name in brackets ( [] ) inside the quotation
marks.

Data validation
bcp now enforces data validation and data checks that might cause scripts to fail if
they're executed on invalid data in a data file. For example, bcp now verifies that:

The native representations of float or real data types are valid.

Unicode data has an even-byte length.

Forms of invalid data that could be bulk imported in earlier versions of SQL Server might
fail to load now; whereas, in earlier versions, the failure didn't occur until a client tried to
access the invalid data. The added validation minimizes surprises when querying the
data after bulkload.

Bulk exporting or importing SQLXML


documents
To bulk export or import SQLXML data, use one of the following data types in your
format file.

Data type Effect

SQLCHAR or The data is sent in the client code page or in the code page implied by
SQLVARYCHAR the collation). The effect is the same as specifying the -c switch without
specifying a format file.

SQLNCHAR or The data is sent as Unicode. The effect is the same as specifying the -w
SQLNVARCHAR switch without specifying a format file.

SQLBINARY or The data is sent without any conversion.


SQLVARYBIN

Permissions
A bcp out operation requires SELECT permission on the source table.

A bcp in operation minimally requires SELECT/INSERT permissions on the target table.


In addition, ALTER TABLE permission is required if any of the following conditions are
true:

Constraints exist and the CHECK_CONSTRAINTS hint isn't specified.

7 Note
Disabling constraints is the default behavior. To enable constraints explicitly,
use the -h option with the CHECK_CONSTRAINTS hint.

Triggers exist and the FIRE_TRIGGER hint isn't specified.

7 Note

By default, triggers are not fired. To fire triggers explicitly, use the -h option
with the FIRE_TRIGGERS hint.

You use the -E option to import identity values from a data file.

7 Note

Requiring ALTER TABLE permission on the target table was new in SQL Server 2005
(9.x). This new requirement might cause bcp scripts that do not enforce triggers
and constraint checks to fail if the user account lacks ALTER table permissions for
the target table.

Character mode ( -c ) and native mode ( -n )


best practices
This section has recommendations for character mode ( -c ) and native mode ( -n ).

(Administrator/User) When possible, use native format ( -n ) to avoid the separator


issue. Use the native format to export and import using SQL Server. Export data
from SQL Server using the -c or -w option if the data will be imported to a non-
SQL Server database.

(Administrator) Verify data when using BCP OUT. For example, when you use BCP
OUT, BCP IN, and then BCP OUT verify that the data is properly exported and the
terminator values aren't used as part of some data value. Consider overriding the
default terminators (using -t and -r options) with random hexadecimal values to
avoid conflicts between terminator values and data values.

(User) Use a long and unique terminator (any sequence of bytes or characters) to
minimize the possibility of a conflict with the actual string value. This can be done
by using the -t and -r options.
Examples
The examples in this section make use of the WideWorldImporters sample database for
SQL Server 2016 (13.x) and later versions, Azure SQL Database, and Azure SQL Managed
Instance. WideWorldImporters can be downloaded from
https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-
v1.0 . See RESTORE (Transact-SQL) for the syntax to restore the sample database.

Example test conditions


Except where specified otherwise, the examples assume that you use Windows
Authentication and have a trusted connection to the server instance on which you are
running the bcp command. A directory named D:\BCP is used in many of the examples.

The following script creates an empty copy of the


WideWorldImporters.Warehouse.StockItemTransactions table and then adds a primary key

constraint. Run the following T-SQL script in SQL Server Management Studio (SSMS)

SQL

USE WideWorldImporters;
GO

SET NOCOUNT ON;

IF NOT EXISTS (SELECT * FROM sys.tables WHERE name =


'Warehouse.StockItemTransactions_bcp')
BEGIN
SELECT * INTO WideWorldImporters.Warehouse.StockItemTransactions_bcp
FROM WideWorldImporters.Warehouse.StockItemTransactions
WHERE 1 = 2;

ALTER TABLE Warehouse.StockItemTransactions_bcp


ADD CONSTRAINT PK_Warehouse_StockItemTransactions_bcp PRIMARY KEY
NONCLUSTERED
(StockItemTransactionID ASC);
END

7 Note

Truncate the StockItemTransactions_bcp table as needed.

TRUNCATE TABLE WideWorldImporters.Warehouse.StockItemTransactions_bcp;


A. Identify bcp utility version
At a command prompt, enter the following command:

Windows Command Prompt

bcp -v

B. Copy table rows into a data file (with a trusted


connection)
The following examples illustrate the out option on the
WideWorldImporters.Warehouse.StockItemTransactions table.

Basic This example creates a data file named StockItemTransactions_character.bcp


and copies the table data into it using character format.

At a command prompt, enter the following command:

Windows Command Prompt

bcp WideWorldImporters.Warehouse.StockItemTransactions out


D:\BCP\StockItemTransactions_character.bcp -c -T

Expanded This example creates a data file named


StockItemTransactions_native.bcp and copies the table data into it using the

native format. The example also: specifies the maximum number of syntax errors,
an error file, and an output file.

At a command prompt, enter the following command:

Windows Command Prompt

bcp WideWorldImporters.Warehouse.StockItemTransactions OUT


D:\BCP\StockItemTransactions_native.bcp -m 1 -n -e D:\BCP\Error_out.log
-o D:\BCP\Output_out.log -S -T

Review Error_out.log and Output_out.log . Error_out.log should be blank. Compare


the file sizes between StockItemTransactions_character.bcp and
StockItemTransactions_native.bcp .
C. Copy table rows into a data file (with mixed-mode
authentication)
The following example illustrates the out option on the
WideWorldImporters.Warehouse.StockItemTransactions table. This example creates a data

file named StockItemTransactions_character.bcp and copies the table data into it using
character format.

The example assumes that you use mixed-mode authentication, and you must use the -
U switch to specify your login ID. Also, unless you are connecting to the default instance

of SQL Server on the local computer, use the -S switch to specify the system name and,
optionally, an instance name.

At a command prompt, enter the following command: (The system prompts you for
your password.)

Windows Command Prompt

bcp WideWorldImporters.Warehouse.StockItemTransactions out


D:\BCP\StockItemTransactions_character.bcp -c -U<login_id> -
S<server_name\instance_name>

D. Copy data from a file to a table


The following examples illustrate the in option on the
WideWorldImporters.Warehouse.StockItemTransactions_bcp table using files created

previously.

Basic This example uses the StockItemTransactions_character.bcp data file


previously created.

At a command prompt, enter the following command:

Windows Command Prompt

bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp IN
D:\BCP\StockItemTransactions_character.bcp -c -T

Expanded This example uses the StockItemTransactions_native.bcp data file


previously created. The example also: use the hint TABLOCK , specifies the batch size,
the maximum number of syntax errors, an error file, and an output file.
At a command prompt, enter the following command:

Windows Command Prompt

bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp IN
D:\BCP\StockItemTransactions_native.bcp -b 5000 -h "TABLOCK" -m 1 -n -e
D:\BCP\Error_in.log -o D:\BCP\Output_in.log -S -T

Review Error_in.log and Output_in.log .

E. Copy a specific column into a data file


To copy a specific column, you can use the queryout option. The following example
copies only the StockItemTransactionID column of the
Warehouse.StockItemTransactions table into a data file.

At a command prompt, enter the following command:

Windows Command Prompt

bcp "SELECT StockItemTransactionID FROM


WideWorldImporters.Warehouse.StockItemTransactions WITH (NOLOCK)" queryout
D:\BCP\StockItemTransactionID_c.bcp -c -T

F. Copy a specific row into a data file


To copy a specific row, you can use the queryout option. The following example copies
only the row for the person named Amy Trefl from the
WideWorldImporters.Application.People table into a data file Amy_Trefl_c.bcp .

7 Note

The -d switch is used identify the database.

At a command prompt, enter the following command:

Windows Command Prompt

bcp "SELECT * from Application.People WHERE FullName = 'Amy Trefl'" queryout


D:\BCP\Amy_Trefl_c.bcp -d WideWorldImporters -c -T
G. Copy data from a query to a data file
To copy the result set from a Transact-SQL statement to a data file, use the queryout
option. The following example copies the names from the
WideWorldImporters.Application.People table, ordered by full name, into the
People.txt data file.

7 Note

The -t switch is used to create a comma-delimited file.

At a command prompt, enter the following command:

Windows Command Prompt

bcp "SELECT FullName, PreferredName FROM


WideWorldImporters.Application.People ORDER BY FullName" queryout
D:\BCP\People.txt -t, -c -T

H. Create format files


The following example creates three different format files for the
Warehouse.StockItemTransactions table in the WideWorldImporters database. Review the

contents of each created file.

At a command prompt, enter the following commands:

Windows Command Prompt

REM non-XML character format


bcp WideWorldImporters.Warehouse.StockItemTransactions format nul -f
D:\BCP\StockItemTransactions_c.fmt -c -T

REM non-XML native format


bcp WideWorldImporters.Warehouse.StockItemTransactions format nul -f
D:\BCP\StockItemTransactions_n.fmt -n -T

REM XML character format


bcp WideWorldImporters.Warehouse.StockItemTransactions format nul -f
D:\BCP\StockItemTransactions_c.xml -x -c -T

7 Note
To use the -x switch, you must be using a bcp 9.0 client. For information about
how to use the bcp 9.0 client, see "Remarks."

For more information, see Non-XML Format Files (SQL Server) and XML Format Files
(SQL Server).

I. Use a format file to bulk import with bcp


To use a previously created format file when importing data into an instance of SQL
Server, use the -f switch with the in option. For example, the following command bulk
copies the contents of a data file, StockItemTransactions_character.bcp , into a copy of
the Warehouse.StockItemTransactions_bcp table by using the previously created format
file, StockItemTransactions_c.xml .

7 Note

The -L switch is used to import only the first 100 records.

At a command prompt, enter the following command:

Windows Command Prompt

bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp in
D:\BCP\StockItemTransactions_character.bcp -L 100 -f
D:\BCP\StockItemTransactions_c.xml -T

7 Note

Format files are useful when the data file fields are different from the table
columns; for example, in their number, ordering, or data types. For more
information, see Format Files for Importing or Exporting Data (SQL Server).

J. Specify a code page


The following partial code example shows bcp import while specifying a code page
65001:

Windows Command Prompt


bcp MyTable in "D:\data.csv" -T -c -C 65001 -t , ...

K. Example output file using a custom field and row


terminators
This example shows two sample files, generated by bcp using custom field and row
terminators.

1. Create a table dbo.T1 in the tempdb database, with two columns, ID and Name .

SQL

USE tempdb;
GO

CREATE TABLE dbo.T1 (ID INT, [Name] NVARCHAR(20));


GO

INSERT INTO dbo.T1 VALUES (1, N'Natalia');


INSERT INTO dbo.T1 VALUES (2, N'Mark');
INSERT INTO dbo.T1 VALUES (3, N'Randolph');
GO

2. Generate an output file from the example table dbo.T1 , using a custom field
terminator.

In this example, the server name is MYSERVER , and the custom field terminator is
specified by -t , .

Windows Command Prompt

bcp dbo.T1 out T1.txt -T -S MYSERVER -d tempdb -w -t ,

Here is the result set.

Output

1,Natalia
2,Mark
3,Randolph

3. Generate an output file from the example table dbo.T1 , using a custom field
terminator and custom row terminator.
In this example, the server name is MYSERVER , the custom field terminator is
specified by -t , , and the custom row terminator is specified by -r : .

Windows Command Prompt

bcp dbo.T1 out T1.txt -T -S MYSERVER -d tempdb -w -t , -r :

Here is the result set.

Output

1,Natalia:2,Mark:3,Randolph:

7 Note

The row terminator is always added, even to the last record. The field
terminator, however, isn't added to the last field.

Additional examples
The following articles contain examples of using bcp:

Data Formats for Bulk Import or Bulk Export (SQL Server)


Use Native Format to Import or Export Data (SQL Server)
Use Character Format to Import or Export Data (SQL Server)
Use Unicode Native Format to Import or Export Data (SQL Server)
Use Unicode Character Format to Import or Export Data (SQL Server)

Specify Field and Row Terminators (SQL Server)

Keep Nulls or Use Default Values During Bulk Import (SQL Server)

Keep Identity Values When Bulk Importing Data (SQL Server)

Format Files for Importing or Exporting Data (SQL Server)


Create a Format File (SQL Server)
Use a Format File to Bulk Import Data (SQL Server)
Use a Format File to Skip a Table Column (SQL Server)
Use a Format File to Skip a Data Field (SQL Server)
Use a Format File to Map Table Columns to Data-File Fields (SQL Server)

Examples of Bulk Import and Export of XML Documents (SQL Server)


Considerations and limitations
The bcp utility has a limitation that the error message shows only 512-byte
characters. Only the first 512 bytes of the error message are displayed.

Next steps
Prepare Data for Bulk Export or Import (SQL Server)
BULK INSERT (Transact-SQL)
OPENROWSET (Transact-SQL)
SET QUOTED_IDENTIFIER (Transact-SQL)
sp_configure (Transact-SQL)
sp_tableoption (Transact-SQL)
Format Files for Importing or Exporting Data (SQL Server)

Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback

Contribute to SQL documentation


Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.

For more information, see How to contribute to SQL Server documentation


sqlcmd utility
Article • 06/02/2023

Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance

Azure Synapse Analytics
Analytics Platform System (PDW)

The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script
files through various modes:

At the command prompt.


In Query Editor in SQLCMD mode.
In a Windows script file.
In an operating system ( cmd.exe ) job step of a SQL Server Agent job.

The utility uses ODBC to execute Transact-SQL batches.

7 Note

For SQL Server 2014 (12.x) and previous versions, see sqlcmd utility.

For using sqlcmd on Linux, see Install sqlcmd and bcp on Linux.

Download and install sqlcmd

Windows

Download Microsoft Command Line Utilities 15 for SQL Server (x64)


Download Microsoft Command Line Utilities 15 for SQL Server (x86)

The command line tools are General Availability (GA), however they're being released
with the installer package for SQL Server 2019 (15.x).

Version information
Release number: 15.0.4298.1
Build number: 15.0.4298.1
Release date: April 7, 2023

The new version of sqlcmd supports Azure Active Directory (Azure AD) authentication,
including Multi-Factor Authentication (MFA) support for Azure SQL Database, Azure
Synapse Analytics, and Always Encrypted features.

System requirements
Windows 7 through Windows 11
Windows Server 2008 through Windows Server - 2022

This component requires both the built-in Windows Installer 5 and the Microsoft ODBC
Driver 17 for SQL Server.

Linux and macOS


See Install sqlcmd and bcp on Linux for instructions to install sqlcmd on Linux and
macOS.

Check version
To check the sqlcmd version, execute the sqlcmd -? command and confirm that
15.0.4298.1, or a later version, is in use.

7 Note

You need version 13.1 or higher to support Always Encrypted ( -g ) and Azure AD
authentication ( -G ). You may have several versions of sqlcmd installed on your
computer. Be sure you are using the correct version. To determine the version,
execute sqlcmd -? .

Preinstalled

Azure Cloud Shell


You can try the sqlcmd utility from Azure Cloud Shell, as it is preinstalled by default:
Launch Cloud Shell

Azure Data Studio


To run sqlcmd statements in Azure Data Studio, select "Enable SQLCMD" from the editor
toolbar.
SQL Server Management Studio (SSMS)
To run sqlcmd statements in SSMS, select SQLCMD Mode from the top navigation
Query Menu dropdown.

) Important

SQL Server Management Studio (SSMS) uses the Microsoft .NET Framework
SqlClient for execution in regular and SQLCMD mode in Query Editor. When
sqlcmd is run from the command-line, sqlcmd uses the ODBC driver. Because
different default options may apply, you might see different behavior when you
execute the same query in SQL Server Management Studio in SQLCMD Mode and
in the sqlcmd utility.

Syntax
Console

sqlcmd

-a packet_size

-A (dedicated administrator connection)

-b (terminate batch job if there is an error)

-c batch_terminator

-C (trust the server certificate)

-d db_name

-D

-e (echo input)

-E (use trusted connection)

-f codepage | i:codepage[,o:codepage] | o:codepage[,i:codepage]

-g (enable column encryption)

-G (use Azure Active Directory for authentication)

-h rows_per_header

-H workstation_name

-i input_file

-I (enable quoted identifiers)


-j (Print raw error messages)

-k[1 | 2] (remove or replace control characters)

-K application_intent

-l login_timeout

-L[c] (list servers, optional clean output)

-m error_level

-M multisubnet_failover

-N (encrypt connection)

-o output_file

-p[1] (print statistics, optional colon format)

-P password

-q "cmdline query"

-Q "cmdline query" (and exit)

-r[0 | 1] (msgs to stderr)

-R (use client regional settings)

-s col_separator

-S [protocol:]server[instance_name][,port]

-t query_timeout

-u (unicode output file)

-U login_id

-v var = "value"

-V error_severity_level

-w screen_width

-W (remove trailing spaces)

-x (disable variable substitution)

-X[1] (disable commands, startup script, environment variables, optional


exit)

-y variable_length_type_display_width

-Y fixed_length_type_display_width

-z new_password

-Z new_password (and exit)

-? (usage)

Currently, sqlcmd doesn't require a space between the command-line option and the
value. However, in a future release, a space may be required between the command-line
option and the value.

Command-line options

Login-related options

-A
Signs in to SQL Server with a dedicated administrator connection (DAC). This kind of
connection is used to troubleshoot a server. This connection works only with server
computers that support DAC. If DAC isn't available, sqlcmd generates an error message,
and then exits. For more information about DAC, see Diagnostic Connection for
Database Administrators. The -A option isn't supported with the -G option. When
connecting to Azure SQL Database using -A , you must be an administrator on the
logical SQL server. DAC isn't available for an Azure AD administrator.

-C

This option is used by the client to configure it to implicitly trust the server certificate
without validation. This option is equivalent to the ADO.NET option
TRUSTSERVERCERTIFICATE = true .
-d db_name
Issues a USE <db_name> statement when you start sqlcmd. This option sets the sqlcmd
scripting variable SQLCMDDBNAME . This parameter specifies the initial database. The default
is your login's default-database property. If the database doesn't exist, an error message
is generated and sqlcmd exits.

-D
Interprets the server name provided to -S as a DSN instead of a hostname. For more
information, see DSN support in sqlcmd and bcp in Connecting with sqlcmd.

7 Note

The -D option is only available on Linux and macOS clients. On Windows clients, it
previously referred to a now-obsolete option which has been removed and is
ignored.

-l login_timeout
Specifies the number of seconds before a sqlcmd login to the ODBC driver times out
when you try to connect to a server. This option sets the sqlcmd scripting variable
SQLCMDLOGINTIMEOUT . The default time-out for login to sqlcmd is 8 seconds. When using

the -G option to connect to Azure SQL Database or Azure Synapse Analytics and
authenticate using Azure AD, a timeout value of at least 30 seconds is recommended.
The login time-out must be a number between 0 and 65534 . If the value supplied isn't
numeric, or doesn't fall into that range, sqlcmd generates an error message. A value of
0 specifies time-out to be infinite.

-E

Uses a trusted connection instead of using a user name and password to sign in to SQL
Server. By default, without -E specified, sqlcmd uses the trusted connection option.

The -E option ignores possible user name and password environment variable settings
such as SQLCMDPASSWORD . If the -E option is used together with the -U option or the -P
option, an error message is generated.

-g
Sets the Column Encryption setting to Enabled . For more information, see Always
Encrypted. Only master keys stored in Windows Certificate Store are supported. The -g
option requires at least sqlcmd version 13.1 . To determine your version, execute
sqlcmd -? .

-G

This option is used by the client when connecting to Azure SQL Database or Azure
Synapse Analytics to specify that the user be authenticated using Azure AD
authentication. This option sets the sqlcmd scripting variable SQLCMDUSEAAD = true . The
-G option requires at least sqlcmd version 13.1 . To determine your version, execute
sqlcmd -? . For more information, see Connecting to SQL Database or Azure Synapse

Analytics By Using Azure Active Directory Authentication. The -A option isn't supported
with the -G option.

The -G option only applies to Azure SQL Database and Azure Synapse Analytics.

Azure AD interactive authentication isn't currently supported on Linux or macOS. Azure


AD integrated authentication requires Microsoft ODBC Driver 17 for SQL Server version
17.6.1 or higher and a properly Configured Kerberos environment.

Azure Active Directory username and password

When you want to use an Azure AD user name and password, you can provide the
-G option with the user name and password, by using the -U and -P options.

Console

sqlcmd -S testsrv.database.windows.net -d Target_DB_or_DW -U


bob@contoso.com -P MyAzureADPassword -G

The -G parameter generates the following connection string in the backend:

Output

SERVER =
Target_DB_or_DW.testsrv.database.windows.net;UID=bob@contoso.com;PWD=My
AzureADPassword;AUTHENTICATION=ActiveDirectoryPassword;

Azure Active Directory integrated authentication

For Azure AD integrated authentication, provide the -G option without a user


name or password. Azure AD integrated authentication requires Microsoft ODBC
Driver 17 for SQL Server version 17.6.1 and later versions, and a properly
configured Kerberos environment.

Console

sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G

This generates the following connection string in the backend:

Output

SERVER =
Target_DB_or_DW.testsrv.database.windows.net;Authentication=ActiveDirec
toryIntegrated;Trusted_Connection=NO;

7 Note

The -E option ( Trusted_Connection ) cannot be used along with the -G


option.

Azure Active Directory interactive authentication

The Azure AD interactive authentication for Azure SQL Database and Azure
Synapse Analytics, allows you to use an interactive method supporting multi-factor
authentication. For more information, see Active Directory Interactive
Authentication.

Azure AD interactive authentication requires sqlcmd version 15.0.1000.34 and later


versions, as well as ODBC version 17.2 and later versions.

To enable interactive authentication, provide the -G option with user name ( -U )


only, without a password.

The following example exports data using Azure AD interactive mode, indicating a
username where the user represents an Azure AD account. This is the same
example used in the previous section, Azure Active Directory username and
password.

Interactive mode requires manually entering a password, or for accounts with


multi-factor authentication enabled, complete your configured MFA authentication
method.

Console
sqlcmd -S testsrv.database.windows.net -d Target_DB_or_DW -G -U
alice@aadtest.onmicrosoft.com

The previous command generates the following connection string in the backend:

Output

SERVER =
Target_DB_or_DW.testsrv.database.windows.net;UID=alice@aadtest.onmicros
oft.com;AUTHENTICATION=ActiveDirectoryInteractive

In case an Azure AD user is a domain federated user using a Windows account, the
user name required in the command-line contains its domain account (for example
joe@contoso.com ):

Console

sqlcmd -S testsrv.database.windows.net -d Target_DB_or_DW -G -U


joe@contoso.com

If guest users exist in a specific Azure AD tenant, and are part of a group that exists
in Azure SQL Database that has database permissions to execute the sqlcmd
command, their guest user alias is used (for example, keith0@adventureworks.com ).

) Important

There is a known issue when using the -G and -U option with sqlcmd, where
putting the -U option before the -G option may cause authentication to fail.
Always start with the -G option followed by the -U option.

-H workstation_name

A workstation name. This option sets the sqlcmd scripting variable SQLCMDWORKSTATION .
The workstation name is listed in the hostname column of the sys.sysprocesses catalog
view, and can be returned using the stored procedure sp_who . If this option isn't
specified, the default is the current computer name. This name can be used to identify
different sqlcmd sessions.

-j
Prints raw error messages to the screen.
-K application_intent
Declares the application workload type when connecting to a server. The only currently
supported value is ReadOnly . If -K isn't specified, sqlcmd doesn't support connectivity
to a secondary replica in an availability group. For more information, see Active
Secondaries: Readable Secondary Replica (Always On Availability Groups).

-M multisubnet_failover
Always specify -M when connecting to the availability group listener of a SQL Server
availability group or a SQL Server Failover Cluster Instance. -M provides for faster
detection of and connection to the (currently) active server. If -M isn't specified, -M is
off. For more information about Listeners, Client Connectivity, Application Failover,
Creation and Configuration of Availability Groups (SQL Server), Failover Clustering and
Always On Availability Groups (SQL Server), and Active Secondaries: Readable Secondary
Replicas(Always On Availability Groups).

-N

This option is used by the client to request an encrypted connection.

-P password

A user-specified password. Passwords are case-sensitive. If the -U option is used and


the -P option isn't used, and the SQLCMDPASSWORD environment variable hasn't been set,
sqlcmd prompts the user for a password. We don't recommend the use of a null (blank)
password, but you can specify the null password by using a pair of contiguous double-
quotation marks for the parameter value ( "" ).

) Important

Using -P should be considered insecure. Avoid giving the password on the


command line. Alternatively, use the SQLCMDPASSWORD environment variable, or
interactively input the password by omitting the -P option.

We recommend that you use a strong password.

The password prompt is displayed by printing the password prompt to the console, as
follows: Password:
User input is hidden. This means that nothing is displayed and the cursor stays in
position.

The SQLCMDPASSWORD environment variable lets you set a default password for the current
session. Therefore, passwords don't have to be hard-coded into batch files. The
following example first sets the SQLCMDPASSWORD variable at the command prompt and
then accesses the sqlcmd utility.

At the command prompt, type:

Console

SET SQLCMDPASSWORD=p@a$$w0rd

At the following command prompt, type:

Console

sqlcmd

If the user name and password combination is incorrect, an error message is generated.

7 Note

The OSQLPASSWORD environment variable has been kept for backward compatibility.
The SQLCMDPASSWORD environment variable takes precedence over the OSQLPASSWORD
environment variable. This means that sqlcmd and osql can be used next to each
other without interference. Old scripts will continue to work.

If the -P option is used with the -E option, an error message is generated.

If the -P option is followed by more than one argument, an error message is generated
and the program exits.

-S [protocol:]server[\instance_name][,port]

Specifies the instance of SQL Server to which to connect. It sets the sqlcmd scripting
variable SQLCMDSERVER .

Specify server_name to connect to the default instance of SQL Server on that server
computer. Specify server_name[\instance_name] to connect to a named instance of SQL
Server on that server computer. If no server computer is specified, sqlcmd connects to
the default instance of SQL Server on the local computer. This option is required when
you execute sqlcmd from a remote computer on the network.

protocol can be tcp (TCP/IP), lpc (shared memory), or np (named pipes).

If you don't specify a server_name[\instance_name] when you start sqlcmd, SQL Server
checks for and uses the SQLCMDSERVER environment variable.

7 Note

The OSQLSERVER environment variable has been kept for backward compatibility.
The SQLCMDSERVER environment variable takes precedence over the OSQLSERVER
environment variable. This means that sqlcmd and osql can be used next to each
other without interference. Old scripts will continue to work.

-U login_id
The login name or contained database user name. For contained database users, you
must provide the database name option ( -d ).

7 Note

The OSQLUSER environment variable has been kept for backward compatibility. The
SQLCMDUSER environment variable takes precedence over the OSQLUSER environment
variable. This means that sqlcmd and osql can be used next to each other without
interference. Old scripts will continue to work.

If you don't specify either the -U option or the -P option, sqlcmd tries to connect by
using Windows Authentication mode. Authentication is based on the Windows account
of the user who is running sqlcmd.

If the -U option is used with the -E option (described later in this article), an error
message is generated. If the -U option is followed by more than one argument, an error
message is generated and the program exits.

-z new_password

Change the password:

Console
sqlcmd -U someuser -P s0mep@ssword -z a_new_p@a$$w0rd

-Z new_password

Change the password and exit:

Console

sqlcmd -U someuser -P s0mep@ssword -Z a_new_p@a$$w0rd

Input/output options

-f codepage | i:codepage[,o:codepage] | o:codepage[,i:codepage]

Specifies the input and output code pages. The codepage number is a numeric value
that specifies an installed Windows code page.

Code-page conversion rules:

If no code pages are specified, sqlcmd uses the current code page for both input
and output files, unless the input file is a Unicode file, in which case no conversion
is required.

sqlcmd automatically recognizes both big-endian and little-endian Unicode input


files. If the -u option has been specified, the output is always little-endian
Unicode.

If no output file is specified, the output code page is the console code page. This
approach enables the output to be displayed correctly on the console.

Multiple input files are assumed to be of the same code page. Unicode and non-
Unicode input files can be mixed.

Enter chcp at the command prompt to verify the code page of cmd.exe .

-i input_file[,input_file2...]
Identifies the file that contains a batch of Transact-SQL statements or stored procedures.
Multiple files may be specified that are read and processed in order. Don't use any
spaces between file names. sqlcmd checks first to see whether all the specified files
exist. If one or more files don't exist, sqlcmd exits. The -i and the -Q / -q options are
mutually exclusive.

Path examples:

Console

-i C:\<filename>

-i \\<Server>\<Share$>\<filename>
-i "C:\Some Folder\<file name>"

File paths that contain spaces must be enclosed in quotation marks.

This option may be used more than once:

Console

sqlcmd -i <input_file1> -i <input_file2>

-o output_file
Identifies the file that receives output from sqlcmd.

If -u is specified, the output_file is stored in Unicode format. If the file name isn't valid,
an error message is generated, and sqlcmd exits. sqlcmd doesn't support concurrent
writing of multiple sqlcmd processes to the same file. The file output will be corrupted
or incorrect. The -f option is also relevant to file formats. This file is created if it doesn't
exist. A file of the same name from a prior sqlcmd session is overwritten. The file
specified here isn't the stdout file. If a stdout file is specified, this file isn't used.

Path examples:

Console

-o C:< filename>

-o \\<Server>\<Share$>\<filename>
-o "C:\Some Folder\<file name>"

File paths that contain spaces must be enclosed in quotation marks.

-r[0 | 1]
Redirects the error message output to the screen ( stderr ). If you don't specify a
parameter or if you specify 0 , only error messages that have a severity level of 11 or
higher are redirected. If you specify 1 , all error message output including PRINT is
redirected. This option has no effect if you use -o . By default, messages are sent to
stdout .

-R
Causes sqlcmd to localize numeric, currency, date, and time columns retrieved from SQL
Server based on the client's locale. By default, these columns are displayed using the
server's regional settings.

-u

Specifies that output_file is stored in Unicode format, regardless of the format of


input_file.

Query execution options

-e
Writes input scripts to the standard output device ( stdout ).

-I
Sets the SET QUOTED_IDENTIFIER connection option to ON . By default, it's set to OFF . For
more information, see SET QUOTED_IDENTIFIER (Transact-SQL).

-q "cmdline query"

Executes a query when sqlcmd starts, but doesn't exit sqlcmd when the query has
finished running. Multiple-semicolon-delimited queries can be executed. Use quotation
marks around the query, as shown in the following example.

At the command prompt, type:

Console

sqlcmd -d AdventureWorks2022 -q "SELECT FirstName, LastName FROM


Person.Person WHERE LastName LIKE 'Whi%';"

sqlcmd -d AdventureWorks2022 -q "SELECT TOP 5 FirstName FROM


Person.Person;SELECT TOP 5 LastName FROM Person.Person;"

) Important

Don't use the GO terminator in the query.

If -b is specified together with this option, sqlcmd exits on error. -b is described


elsewhere in this article.

-Q "cmdline query"

Executes a query when sqlcmd starts and then immediately exits sqlcmd. Multiple-
semicolon-delimited queries can be executed.

Use quotation marks around the query, as shown in the following example.

At the command prompt, type:

Console

sqlcmd -d AdventureWorks2022 -Q "SELECT FirstName, LastName FROM


Person.Person WHERE LastName LIKE 'Whi%';"

sqlcmd -d AdventureWorks2022 -Q "SELECT TOP 5 FirstName FROM


Person.Person;SELECT TOP 5 LastName FROM Person.Person;"

) Important

Don't use the GO terminator in the query.

If -b is specified together with this option, sqlcmd exits on error. -b is described


elsewhere in this article.

-t query_timeout

Specifies the number of seconds before a command (or Transact-SQL statement) times
out. This option sets the sqlcmd scripting variable SQLCMDSTATTIMEOUT . If a query_timeout
value isn't specified, the command doesn't time out. The query_timeout must be a
number between 1 and 65534 . If the value supplied isn't numeric or doesn't fall into
that range, sqlcmd generates an error message.
7 Note

The actual time out value may vary from the specified query_timeout value by
several seconds.

-v var = value [ var = value... ]


Creates a sqlcmd scripting variable that can be used in a sqlcmd script. Enclose the
value in quotation marks if the value contains spaces. You can specify multiple <var>="
<value>" values. If there are errors in any of the values specified, sqlcmd generates an

error message and then exits.

Console

sqlcmd -v MyVar1=something MyVar2="some thing"

sqlcmd -v MyVar1=something -v MyVar2="some thing"

-x
Causes sqlcmd to ignore scripting variables. This parameter is useful when a script
contains many INSERT statements that may contain strings that have the same format as
regular variables, such as $(<variable_name>) .

Format options

-h headers
Specifies the number of rows to print between the column headings. The default is to
print headings one time for each set of query results. This option sets the sqlcmd
scripting variable SQLCMDHEADERS . Use -1 to specify that headers not be printed. Any
value that isn't valid causes sqlcmd to generate an error message and then exit.

-k [1 | 2]
Removes all control characters, such as tabs and new line characters from the output.
This parameter preserves column formatting when data is returned. If 1 is specified, the
control characters are replaced by a single space. If 2 is specified, consecutive control
characters are replaced by a single space. -k is the same as -k1 .
-s col_separator
Specifies the column-separator character. The default is a blank space. This option sets
the sqlcmd scripting variable SQLCMDCOLSEP . To use characters that have special meaning
to the operating system, such as the ampersand ( & ) or semicolon ( ; ), enclose the
character in quotation marks ( " ). The column separator can be any 8-bit character.

-w screen_width
Specifies the screen width for output. This option sets the sqlcmd scripting variable
SQLCMDCOLWIDTH . The column width must be a number greater than 8 and less than
65536 . If the specified column width doesn't fall into that range, sqlcmd generates an

error message. The default width is 80 characters. When an output line exceeds the
specified column width, it wraps on to the next line.

-W

This option removes trailing spaces from a column. Use this option together with the -s
option when preparing data that is to be exported to another application. Can't be used
with the -y or -Y options.

-y variable_length_type_display_width

Sets the sqlcmd scripting variable SQLCMDMAXVARTYPEWIDTH . The default is 256 . It limits
the number of characters that are returned for the large variable length data types:

varchar(max)
nvarchar(max)
varbinary(max)
xml
user-defined data types (UDTs)
text
ntext
image

UDTs can be of fixed length depending on the implementation. If this length of a fixed
length UDT is shorter that display_width, the value of the UDT returned isn't affected.
However, if the length is longer than display_width, the output is truncated.

U Caution
Use the -y 0 option with extreme caution, because it may cause significant
performance issues on both the server and the network, depending on the size of
data returned.

-Y fixed_length_type_display_width
Sets the sqlcmd scripting variable SQLCMDMAXFIXEDTYPEWIDTH . The default is 0 (unlimited).
Limits the number of characters that are returned for the following data types:

char(n), where 1 <= n <= 8000


nchar(n), where 1 <= n <= 4000
varchar(n), where 1 <= n <= 8000
nvarchar(n), where 1 <= n <= 4000
varbinary(n), where 1 <= n <= 4000
sql_variant

Error reporting options

-b
Specifies that sqlcmd exits and returns a DOS ERRORLEVEL value when an error occurs.
The value that is returned to the ERRORLEVEL variable is 1 when the SQL Server error
message has a severity level greater than 10; otherwise, the value returned is 0 . If the -
V option has been set in addition to -b , sqlcmd won't report an error if the severity

level is lower than the values set using -V . Command prompt batch files can test the
value of ERRORLEVEL and handle the error appropriately. sqlcmd doesn't report errors for
severity level 10 (informational messages).

If the sqlcmd script contains an incorrect comment, syntax error, or is missing a scripting
variable, the ERRORLEVEL returned is 1 .

-m error_level
Controls which error messages are sent to stdout . Messages that have a severity level
greater than or equal to this level are sent. When this value is set to -1 , all messages
including informational messages, are sent. Spaces aren't allowed between the -m and
-1 . For example, -m-1 is valid, and -m -1 isn't.
This option also sets the sqlcmd scripting variable SQLCMDERRORLEVEL . This variable has a
default of 0 .

-V error_severity_level

Controls the severity level that is used to set the ERRORLEVEL variable. Error messages
that have severity levels greater than or equal to this value set ERRORLEVEL . Values that
are less than 0 are reported as 0 . Batch and CMD files can be used to test the value of
the ERRORLEVEL variable.

Miscellaneous options

-a packet_size
Requests a packet of a different size. This option sets the sqlcmd scripting variable
SQLCMDPACKETSIZE . packet_size must be a value between 512 and 32767 . The default is

4096 . A larger packet size can enhance performance for execution of scripts that have
lots of Transact-SQL statements between GO commands. You can request a larger packet
size. However, if the request is denied, sqlcmd uses the server default for packet size.

-c batch_terminator

Specifies the batch terminator. By default, commands are terminated and sent to SQL
Server by typing the word GO on a line by itself. When you reset the batch terminator,
don't use Transact-SQL reserved keywords or characters that have special meaning to
the operating system, even if they're preceded by a backslash.

-L[c]

Lists the locally configured server computers, and the names of the server computers
that are broadcasting on the network. This parameter can't be used in combination with
other parameters. The maximum number of server computers that can be listed is 3000.
If the server list is truncated because of the size of the buffer a warning message is
displayed.

7 Note

Because of the nature of broadcasting on networks, sqlcmd may not receive a


timely response from all servers. Therefore, the list of servers returned may vary for
each invocation of this option.

If the optional parameter c is specified, the output appears without the Servers:
header line, and each server line is listed without leading spaces. This presentation is
referred to as clean output. Clean output improves the processing performance of
scripting languages.

-p[1]
Prints performance statistics for every result set. The following display is an example of
the format for performance statistics:

Output

Network packet size (bytes): n

x xact[s]:

Clock Time (ms.): total t1 avg t2 (t3 xacts per sec.)

Where:

x = Number of transactions that are processed by SQL Server.

t1 = Total time for all transactions.


t2 = Average time for a single transaction.

t3 = Average number of transactions per second.

All times are in milliseconds.

If the optional parameter 1 is specified, the output format of the statistics is in colon-
separated format that can be imported easily into a spreadsheet or processed by a
script.

If the optional parameter is any value other than 1 , an error is generated and sqlcmd
exits.

-X[1]

Disables commands that might compromise system security when sqlcmd is executed
from a batch file. The disabled commands are still recognized; sqlcmd issues a warning
message and continues. If the optional parameter 1 is specified, sqlcmd generates an
error message and then exits. The following commands are disabled when the -X
option is used:

ED

!! command

If the -X option is specified, it prevents environment variables from being passed on to


sqlcmd. It also prevents the startup script specified by using the SQLCMDINI scripting
variable from being executed. For more information about sqlcmd scripting variables,
see sqlcmd - Use with scripting variables.

-?
Displays the version of sqlcmd and a syntax summary of sqlcmd options.

Remarks
Options don't have to be used in the order shown in the syntax section.

When multiple results are returned, sqlcmd prints a blank line between each result set in
a batch. In addition, the <x> rows affected message doesn't appear when it doesn't
apply to the statement executed.

To use sqlcmd interactively, type sqlcmd at the command prompt with any one or more
of the options described earlier in this article. For more information, see Use the sqlcmd
Utility

7 Note

The options -l , -Q , -Z or -i cause sqlcmd to exit after execution.

The total length of the sqlcmd command-line in the command environment (for
example cmd.exe or bash ), including all arguments and expanded variables, is
determined by the underlying operating system.

Variable precedence (low to high)


1. System-level environmental variables
2. User-level environmental variables
3. Command shell ( SET X=Y ) set at command prompt before running sqlcmd
4. sqlcmd -v X=Y
5. :Setvar X Y

7 Note

To view the environmental variables, in Control Panel, open System, and then select
the Advanced tab.

sqlcmd scripting variables


Variable Related option R/W Default

SQLCMDUSER -U R ""

SQLCMDPASSWORD -P -- ""

SQLCMDSERVER -S R "DefaultLocalInstance"

SQLCMDWORKSTATION -H R "ComputerName"

SQLCMDDBNAME -d R ""

SQLCMDLOGINTIMEOUT -l R/W "8" (seconds)

SQLCMDSTATTIMEOUT -t R/W "0" = wait indefinitely

SQLCMDHEADERS -h R/W "0"

SQLCMDCOLSEP -s R/W ""

SQLCMDCOLWIDTH -w R/W "0"

SQLCMDPACKETSIZE -a R "4096"

SQLCMDERRORLEVEL -m R/W 0

SQLCMDMAXVARTYPEWIDTH -y R/W "256"

SQLCMDMAXFIXEDTYPEWIDTH -Y R/W "0" = unlimited

SQLCMDEDITOR R/W "edit.com"

SQLCMDINI R ""

SQLCMDUSEAAD -G R/W ""

SQLCMDUSER , SQLCMDPASSWORD , and SQLCMDSERVER are set when :Connect is used.


R indicates the value can only be set one time during program initialization.

R/W indicates that the value can be modified by using the :setvar command and
subsequent commands are influenced by the new value.

sqlcmd commands
In addition to Transact-SQL statements within sqlcmd, the following commands are also
available:

GO [ count ]

:List

[:]RESET

:Error

[:]ED

:Out

[:]!!

:Perftrace

[:]QUIT

:Connect

[:]EXIT

:On Error

:r

:Help

:ServerList

:XML [ ON | OFF ]
:Setvar

:Listvar

Be aware of the following when you use sqlcmd commands:

All sqlcmd commands, except GO , must be prefixed by a colon ( : ).

) Important

To maintain backward compatibility with existing osql scripts, some of the


commands will be recognized without the colon, indicated by the : .

sqlcmd commands are recognized only if they appear at the start of a line.

All sqlcmd commands are case insensitive.

Each command must be on a separate line. A command can't be followed by a


Transact-SQL statement or another command.

Commands are executed immediately. They aren't put in the execution buffer as
Transact-SQL statements are.

Editing commands

[:]ED
Starts the text editor. This editor can be used to edit the current Transact-SQL batch, or
the last executed batch. To edit the last executed batch, the ED command must be typed
immediately after the last batch has completed execution.

The text editor is defined by the SQLCMDEDITOR environment variable. The default editor
is 'Edit'. To change the editor, set the SQLCMDEDITOR environment variable. For example,
to set the editor to Microsoft Notepad, at the command prompt, type:

SET SQLCMDEDITOR=notepad

[:]RESET

Clears the statement cache.

:List
Prints the content of the statement cache.

Variables

:Setvar <var> [ "value" ]

Defines sqlcmd scripting variables. Scripting variables have the following format:
$(VARNAME) .

Variable names are case insensitive.

Scripting variables can be set in the following ways:

Implicitly using a command-line option. For example, the -l option sets the
SQLCMDLOGINTIMEOUT sqlcmd variable.

Explicitly by using the :Setvar command.

By defining an environment variable before you run sqlcmd.

7 Note

The -X option prevents environment variables from being passed on to sqlcmd.

If a variable defined by using :Setvar and an environment variable have the same
name, the variable defined by using :Setvar takes precedence.

Variable names must not contain blank space characters.

Variable names can't have the same form as a variable expression, such as $(var) .

If the string value of the scripting variable contains blank spaces, enclose the value in
quotation marks. If a value for a scripting variable isn't specified, the scripting variable is
dropped.

:Listvar
Displays a list of the scripting variables that are currently set.

7 Note
Only scripting variables that are set by sqlcmd, and those that are set using the
:Setvar command will be displayed.

Output commands

:Error <filename> | STDERR | STDOUT

Redirect all error output to the file specified by filename, to stderr or to stdout . The
:Error command can appear multiple times in a script. By default, error output is sent

to stderr .

filename

Creates and opens a file that receives the output. If the file already exists, it is
truncated to zero bytes. If the file isn't available because of permissions or other
reasons, the output won't be switched and is sent to the last specified or default
destination.

STDERR

Switches error output to the stderr stream. If this has been redirected, the target
to which the stream has been redirected receives the error output.

STDOUT

Switches error output to the stdout stream. If this has been redirected, the target
to which the stream has been redirected receives the error output.

:Out <filename> | STDERR | STDOUT


Creates and redirects all query results to the file specified by file name, to stderr or to
stdout . By default, output is sent to stdout . If the file already exists, it is truncated to
zero bytes. The :Out command can appear multiple times in a script.

:Perftrace <filename> | STDERR | STDOUT

Creates and redirects all performance trace information to the file specified by file name,
to stderr or to stdout . By default performance trace output is sent to stdout . If the file
already exists, it is truncated to zero bytes. The :Perftrace command can appear
multiple times in a script.
Execution control commands

:On Error [ exit | ignore ]


Sets the action to be performed when an error occurs during script or batch execution.

When the exit option is used, sqlcmd exits with the appropriate error value.

When the ignore option is used, sqlcmd ignores the error and continues executing the
batch or script. By default, an error message is printed.

[:]QUIT
Causes sqlcmd to exit.

[:]EXIT [ ( statement ) ]
Lets you use the result of a SELECT statement as the return value from sqlcmd. If
numeric, the first column of the last result row is converted to a 4-byte integer (long).
MS-DOS, Linux, and macOS pass the low byte to the parent process or operating system
error level. Windows 2000 and later versions passes the whole 4-byte integer. The syntax
is :EXIT(query) .

For example:

text

:EXIT(SELECT @@ROWCOUNT)

You can also include the :EXIT parameter as part of a batch file. For example, at the
command prompt, type:

sqlcmd -Q ":EXIT(SELECT COUNT(*) FROM '%1')"

The sqlcmd utility sends everything between the parentheses ( () ) to the server. If a
system stored procedure selects a set and returns a value, only the selection is returned.
The :EXIT() statement with nothing between the parentheses executes everything
before it in the batch, and then exits without a return value.

When an incorrect query is specified, sqlcmd exits without a return value.

Here is a list of EXIT formats:


:EXIT

Doesn't execute the batch, and then quits immediately and returns no value.

:EXIT( )

Executes the batch, and then quits and returns no value.

:EXIT(query)

Executes the batch that includes the query, and then quits after it returns the
results of the query.

If RAISERROR is used within a sqlcmd script, and a state of 127 is raised, sqlcmd will quit
and return the message ID back to the client. For example:

text

RAISERROR(50001, 10, 127)

This error causes the sqlcmd script to end and return the message ID 50001 to the
client.

The return values -1 to -99 are reserved by SQL Server, and sqlcmd defines the
following additional return values:

Return value Description

-100 Error encountered prior to selecting return value.

-101 No rows found when selecting return value.

-102 Conversion error occurred when selecting return value.

GO [count]
GO signals both the end of a batch and the execution of any cached Transact-SQL

statements. The batch is executed multiple times as separate batches. You can't declare
a variable more than once in a single batch.

Miscellaneous commands

:r <filename>
Parses additional Transact-SQL statements and sqlcmd commands from the file
specified by filename into the statement cache. filename is read relative to the startup
directory in which sqlcmd was run.

If the file contains Transact-SQL statements that aren't followed by GO , you must enter
GO on the line that follows :r .

The file will be read and executed after a batch terminator is encountered. You can issue
multiple :r commands. The file may include any sqlcmd command. This includes the
batch terminator GO .

7 Note

The line count that is displayed in interactive mode will be increased by one for
every :r command encountered. The :r command will appear in the output of the
list command.

:ServerList

Lists the locally configured servers and the names of the servers broadcasting on the
network.

:Connect server_name[\instance_name] [-l timeout] [-U user_name


[-P password]]

Connects to an instance of SQL Server. Also closes the current connection.

Time-out options:

Value Behavior

0 Wait forever

n>0 Wait for n seconds

The SQLCMDSERVER scripting variable reflects the current active connection.

If timeout isn't specified, the value of the SQLCMDLOGINTIMEOUT variable is the default.

If only user_name is specified (either as an option, or as an environment variable), the


user is prompted to enter a password. Users aren't prompted if the SQLCMDUSER or
SQLCMDPASSWORD environment variables have been set. If you don't provide options or
environment variables, Windows Authentication mode is used to sign in. For example to
connect to an instance, instance1 , of SQL Server, myserver , by using integrated security
you would use the following command:

text

:connect myserver\instance1

To connect to the default instance of myserver using scripting variables, you would use
the following:

text

:setvar myusername test

:setvar myservername myserver

:connect $(myservername) $(myusername)

[:]!! command
Executes operating system commands. To execute an operating system command, start
a line with two exclamation marks ( !! ) followed by the operating system command. For
example:

text

:!! dir

7 Note

The command is executed on the computer on which sqlcmd is running.

:XML [ ON | OFF ]

For more information, see XML Output Format and JSON Output Format in this article.

:Help

Lists sqlcmd commands, together with a short description of each command.

sqlcmd file names


sqlcmd input files can be specified with the -i option or the :r command. Output files
can be specified with the -o option or the :Error , :Out and :Perftrace commands.
The following are some guidelines for working with these files:

:Error , :Out and :Perftrace should use separate filename values. If the same
filename is used, inputs from the commands may be intermixed.

If an input file that is located on a remote server is called from sqlcmd on a local
computer, and the file contains a drive file path such as :Out c:\OutputFile.txt ,
the output file is created on the local computer and not on the remote server.

Valid file paths include: C:\<filename> , \\<Server>\<Share$>\<filename> , and


"C:\Some Folder\<file name>" . If there is a space in the path, use quotation marks.

Each new sqlcmd session overwrites existing files that have the same names.

Informational messages
sqlcmd prints any informational message that is sent by the server. In the following
example, after the Transact-SQL statements are executed, an informational message is
printed.

At the command prompt, type the command:

Console

sqlcmd

At the sqlcmd prompt type:

Console

USE AdventureWorks2022;

GO

When you press ENTER , the following informational message is printed: "Changed
database context to 'AdventureWorks2022'."

Output format from Transact-SQL queries


sqlcmd first prints a column header that contains the column names specified in the
select list. The column names are separated by using the SQLCMDCOLSEP character. By
default, this is a space. If the column name is shorter than the column width, the output
is padded with spaces up to the next column.

This line is followed by a separator line that is a series of dash characters. The following
output shows an example.

Start sqlcmd. At the sqlcmd command prompt, type the query:

Console

USE AdventureWorks2022;

SELECT TOP (2) BusinessEntityID, FirstName, LastName

FROM Person.Person;

GO

When you press ENTER , the following result set is returned.

Output

BusinessEntityID FirstName LastName

---------------- ------------ ----------

285 Syed Abbas

293 Catherine Abel

(2 row(s) affected)

Although the BusinessEntityID column is only four characters wide, it has been
expanded to accommodate the longer column name. By default, output is terminated at
80 characters. This can be changed by using the -w option, or by setting the
SQLCMDCOLWIDTH scripting variable.

XML output format


XML output that is the result of a FOR XML clause is output, unformatted, in a continuous
stream.

When you expect XML output, use the following command: :XML ON .

7 Note

sqlcmd returns error messages in the usual format. The error messages are also
output in the XML text stream in XML format. By using :XML ON , sqlcmd does not
display informational messages.
To set the XML mode to off, use the following command: :XML OFF .

The GO command shouldn't appear before the :XML OFF command is issued, because
the :XML OFF command switches sqlcmd back to row-oriented output.

XML (streamed) data and rowset data can't be mixed. If the :XML ON command hasn't
been issued before a Transact-SQL statement that outputs XML streams is executed, the
output is garbled. Once the :XML ON command has been issued, you can't execute
Transact-SQL statements that output regular row sets.

7 Note

The :XML command does not support the SET STATISTICS XML statement.

JSON output format


When you expect JSON output, use the following command: :XML ON . Otherwise, the
output includes both the column name and the JSON text. This output isn't valid JSON.

To set the XML mode to off, use the following command: :XML OFF .

For more info, see XML Output Format in this article.

Use Azure AD authentication


Examples using Azure AD authentication:

Console

sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G -l 30

sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G -U bob@contoso.com


-P MyAzureADPassword -l 30

sqlcmd best practices


Use the following practices to help maximize security and efficiency.

Use integrated security.

Use -X[1] in automated environments.


Secure input and output files by using appropriate file system permissions.

To increase performance, do as much in one sqlcmd session as you can, instead of


in a series of sessions.

Set time-out values for batch or query execution higher than you expect it will take
to execute the batch or query.

Use the following practices to help maximize correctness:

Use -V16 to log any severity 16 level messages. Severity 16 messages indicate
general errors that can be corrected by the user.

Check the exit code and DOS ERRORLEVEL variable after the process has exited.
sqlcmd will return 0 normally, otherwise it sets the ERRORLEVEL as configured by -
V . In other words, ERRORLEVEL shouldn't be expected to be the same value as the

error number reported from SQL Server. The error number is a SQL Server-specific
value corresponding to the system function @@ERROR. ERRORLEVEL is a sqlcmd-
specific value to indicate why sqlcmd terminated, and its value is influenced by
specifying -b command line argument.

Using -V16 in combination with checking the exit code and DOS ERRORLEVEL can help
catch errors in automated environments, particularly quality gates before a production
release.

Next steps
Start the sqlcmd Utility
Run Transact-SQL Script Files Using sqlcmd
Use the sqlcmd Utility
Use sqlcmd with Scripting Variables
Connect to the Database Engine With sqlcmd
Edit SQLCMD Scripts with Query Editor
Manage Job Steps
Create a CmdExec Job Step
SqlPackage
Article • 05/11/2023

SqlPackage is a command-line utility that automates the following database development tasks by
exposing some of the public Data-Tier Application Framework (DacFx) APIs:

Version: Returns the build number of the SqlPackage application. Added in version 18.6.

Extract: Creates a data-tier application (.dacpac) file containing the schema or schema and user
data from a connected SQL database.

Publish: Incrementally updates a database schema to match the schema of a source .dacpac file.
If the database does not exist on the server, the publish operation creates it. Otherwise, an
existing database is updated.

Export: Exports a connected SQL database - including database schema and user data - to a
BACPAC file (.bacpac).

Import: Imports the schema and table data from a BACPAC file into a new user database.

DeployReport: Creates an XML report of the changes that would be made by a publish action.

DriftReport: Creates an XML report of the changes that have been made to a registered
database since it was last registered.

Script: Creates a Transact-SQL incremental update script that updates the schema of a target to
match the schema of a source.

The SqlPackage command line tool allows you to specify these actions along with action-specific
parameters and properties.

Download the latest version. For details about the latest release, see the release notes.

Command-Line Syntax
SqlPackage initiates the actions specified using the parameters, properties, and SQLCMD variables
specified on the command line.

Bash

SqlPackage {parameters} {properties} {SQLCMD variables}

Exit codes
SqlPackage commands return the following exit codes:

0 = success
non-zero = failure
Usage example
Further examples are available on the individual action pages.

Creating a .dacpac file of the current database schema:

Windows Command Prompt

SqlPackage /TargetFile:"C:\sqlpackageoutput\output_current_version.dacpac"
/Action:Extract /SourceServerName:"." /SourceDatabaseName:"Contoso.Database"

Parameters
Some parameters are shared between the SqlPackage actions. Below is a table summarizing the
parameters, for more information click into the specific action pages.

Parameter Short Extract Publish Export Import DeployReport DriftReport Script


Form

/AccessToken: /at x x x x x x x

/ClientId: /cid x

/DeployScriptPath: /dsp x x

/DeployReportPath: /drp x x

/Diagnostics: /d x x x x x x x

/DiagnosticsFile: /df x x x x x x x

/MaxParallelism: /mp x x x x x x x

/OutputPath: /op x x x

/OverwriteFiles: /of x x x x x x

/Profile: /pr x x x

/Properties: /p x x x x x x

/Quiet: /q x x x x x x x

/Secret: /secr x

/SourceConnectionString: /scs x x x x x

/SourceDatabaseName: /sdn x x x x x

/SourceEncryptConnection: /sec x x x x x

/SourceFile: /sf x x x x

/SourcePassword: /sp x x x x x

/SourceServerName: /ssn x x x x x
Parameter Short Extract Publish Export Import DeployReport DriftReport Script
Form

/SourceTimeout: /st x x x x x

/SourceTrustServerCertificate: /stsc x x x x x

/SourceUser: /su x x x x x

/TargetConnectionString: /tcs x x x x

/TargetDatabaseName: /tdn x x x x x

/TargetEncryptConnection: /tec x x x x x

/TargetFile: /tf x x x x

/TargetPassword: /tp x x x x x

/TargetServerName: /tsn x x x x x

/TargetTimeout: /tt x x x x x

/TargetTrustServerCertificate: /ttsc x x x x x

/TargetUser: /tu x x x x x

/TenantId: /tid x x x x x x x

/UniversalAuthentication: /ua x x x x x x x

/Variables: /v x x

Properties
SqlPackage actions support a large number of properties to modify the default behavior of an action.
For more information click into the specific action pages.

Utility commands

Version
Displays the sqlpackage version as a build number. Can be used in interactive prompts as well as in
automated pipelines.

Windows Command Prompt

SqlPackage /Version

Help
You can display SqlPackage usage information by using /? or /help:True .
Windows Command Prompt

SqlPackage /?

For parameter and property information specific to a particular action, use the help parameter in
addition to that action's parameter.

Windows Command Prompt

SqlPackage /Action:Publish /?

Authentication
SqlPackage authenticates using methods available in SqlClient. Configuring the authentication type
can be accomplished via the connection string parameters for each SqlPackage action
( /SourceConnectionString and /TargetConnectionString ) or through individual parameters for
connection properties. The following authentication methods are supported in a connection string:

SQL Server authentication


Active Directory (Windows) authentication
Azure Active Directory authentication
Username/password
Integrated authentication
Universal authentication
Managed identity
Service principal

Managed identity
In automated environments Azure Active Directory Managed identity is the recommended
authentication method. This method does not require passing credentials to SqlPackage at runtime.
The managed identity is configured for the environment where the SqlPackage action is run and the
SqlPackage action will use that identity to authenticate to Azure SQL. For more information on
configuring Managed identity for your environment, please see the Managed identity documentation.

An example connection string using system-assigned managed identity is:

Bash

Server=sampleserver.database.windows.net; Authentication=Active Directory Managed


Identity; Database=sampledatabase;

Environment variables

Connection pooling
Connection pooling can be enabled for all connections made by SqlPackage by setting the
CONNECTION_POOLING_ENABLED environment variable to True . This setting is recommended for
operations with Azure Active Directory username/password connections to avoid MSAL throttling.

Temporary files
During SqlPackage operations the table data is written to temporary files before compression or after
decompression. For large databases these temporary files can take up a significant amount of disk
space but their location can be specified. The export and extract operations include an optional
property to specify /p:TempDirectoryForTableData to override the SqlPackage's default value.

The default value is established by GetTempPath within SqlPackage.

For Windows, the following environment variables are checked in the following order and the first
path that exists is used:

1. The path specified by the TMP environment variable.


2. The path specified by the TEMP environment variable.
3. The path specified by the USERPROFILE environment variable.
4. The Windows directory.

For Linux and macOS, if the path is not specified in the TMPDIR environment variable, the default
path /tmp/ is used.

SqlPackage and database users


Contained database users are included in SqlPackage operations. However, the password portion of
the definition is set to a randomly generated string by SqlPackage, the existing value is not
transferred. It is recommended that the new user's password is reset to a secure value following the
import of a .bacpac or the deployment of a .dacpac . In an automated environment the password
values can be retrieved from a secure keystore, such as Azure Key Vault, in a step following
SqlPackage.

Usage data collection


SqlPackage contains Internet-enabled features that can collect and send anonymous feature usage
and diagnostic data to Microsoft.

SqlPackage may collect standard computer, use, and performance information that may be
transmitted to Microsoft and analyzed to improve the quality, security, and reliability of SqlPackage.

SqlPackage doesn't collect user specific or personal information. To help approximate a single user for
diagnostic purposes, SqlPackage will generate a random GUID for each computer it runs on and use
that value for all events it sends.

For details, see the Microsoft Privacy Statement , and SQL Server Privacy supplement.
Disable telemetry reporting
To disable telemetry collection and reporting, update the environment variable
DACFX_TELEMETRY_OPTOUT to true or 1 .

Support
The DacFx library and the SqlPackage CLI tool have adopted the Microsoft Modern Lifecycle Policy .
All security updates, fixes, and new features will be released only in the latest point version of the
major version. Maintaining your DacFx or SqlPackage installations to the current version helps ensure
that you will receive all applicable bug fixes in a timely manner.

Supported SQL offerings


SqlPackage and DacFx supports all supported SQL versions at time of the SqlPackage/DacFx release.
For example, a SqlPackage release on January 14th 2022 supports all supported versions of SQL in
January 14th 2022. For more on SQL support policies, see the SQL support policy.

Next steps
Learn more about SqlPackage Extract
Learn more about SqlPackage Publish
Learn more about SqlPackage Export
Learn more about SqlPackage Import
Connection modules for Microsoft SQL
Database
Article • 07/19/2023

This article provides download links to connection modules or drivers that your client
programs can use for interacting with Microsoft SQL Server, and with its twin in the
cloud Azure SQL Database. Drivers are available for a variety of programming
languages, running on the following operating systems:

Linux
macOS
Windows

OOP-to-relational mismatch:

Relational: Client programs that are written in an object-oriented programming (OOP)


language often use SQL drivers, which return queried data in a format that is more
relational than object oriented. C# using ADO.NET is one example. The OOP-relational
format mismatch sometimes makes the OOP code harder to write and understand.

ORM: Other drivers or frameworks return queried data in the OOP format, avoiding the
mismatch. These drivers work by expecting that classes have been defined to match the
data columns of particular SQL tables. The driver then performs the object-relational
mapping (ORM) to return queried data as an instance of a class. Microsoft's Entity
Framework (EF) for C#, and Hibernate for Java, are two examples.

The present article devotes separate sections to these two kinds of connection drivers.

Drivers for relational access


Language Download the SQL driver

C# ADO.NET
Microsoft.Data.SqlClient
.NET Core for: Linux-Ubuntu, macOS, Windows
Entity Framework Core
Entity Framework

C++ ODBC

OLE DB
Language Download the SQL driver

Go Go MSSQL driver, install instructions


Go download page

Java JDBC

Node.js Node.js driver, install instructions

PHP PHP

Python pyodbc, install instructions


Download ODBC

Ruby Ruby driver, install instructions


Ruby download page

Drivers for ORM access


The following table lists examples of Object Relational Mapping (ORM) frameworks that
client applications use to connect to Microsoft SQL Database.

Language ORM driver download

C# Entity Framework Core


Entity Framework (6.x or later)

Go GORM

Java Hibernate ORM

PHP Eloquent ORM, included in Laravel install

Node.js Sequelize ORM


Prisma

Python Django
SQL Server backend for Django

Ruby Ruby on Rails

Build-an-app webpages
https://aka.ms/sqldev takes you to a set of Build-an-app webpages. The webpages
provide information about numerous combinations of programming language,
operating system, and SQL connection driver. Among the information provided by the
Build-an-app webpages are the following items:
Details about how to get started from the very beginning, for each combination of
language + operating system + driver.
Instructions for installing the latest SQL connection drivers.
Code examples for each of the following items:
Object-relational code examples.
ORM code examples.
Columnstore index demonstrations for much faster performance.

First page, of Build-an-app webpages:


Menu for Java - Ubuntu, of Build-an-app webpages

Related links
Code examples for connecting to Azure SQL Database in the cloud, with Java and
other languages.
Connection modules for Microsoft SQL
Database
Article • 07/19/2023

This article provides download links to connection modules or drivers that your client
programs can use for interacting with Microsoft SQL Server, and with its twin in the
cloud Azure SQL Database. Drivers are available for a variety of programming
languages, running on the following operating systems:

Linux
macOS
Windows

OOP-to-relational mismatch:

Relational: Client programs that are written in an object-oriented programming (OOP)


language often use SQL drivers, which return queried data in a format that is more
relational than object oriented. C# using ADO.NET is one example. The OOP-relational
format mismatch sometimes makes the OOP code harder to write and understand.

ORM: Other drivers or frameworks return queried data in the OOP format, avoiding the
mismatch. These drivers work by expecting that classes have been defined to match the
data columns of particular SQL tables. The driver then performs the object-relational
mapping (ORM) to return queried data as an instance of a class. Microsoft's Entity
Framework (EF) for C#, and Hibernate for Java, are two examples.

The present article devotes separate sections to these two kinds of connection drivers.

Drivers for relational access


Language Download the SQL driver

C# ADO.NET
Microsoft.Data.SqlClient
.NET Core for: Linux-Ubuntu, macOS, Windows
Entity Framework Core
Entity Framework

C++ ODBC

OLE DB
Language Download the SQL driver

Go Go MSSQL driver, install instructions


Go download page

Java JDBC

Node.js Node.js driver, install instructions

PHP PHP

Python pyodbc, install instructions


Download ODBC

Ruby Ruby driver, install instructions


Ruby download page

Drivers for ORM access


The following table lists examples of Object Relational Mapping (ORM) frameworks that
client applications use to connect to Microsoft SQL Database.

Language ORM driver download

C# Entity Framework Core


Entity Framework (6.x or later)

Go GORM

Java Hibernate ORM

PHP Eloquent ORM, included in Laravel install

Node.js Sequelize ORM


Prisma

Python Django
SQL Server backend for Django

Ruby Ruby on Rails

Build-an-app webpages
https://aka.ms/sqldev takes you to a set of Build-an-app webpages. The webpages
provide information about numerous combinations of programming language,
operating system, and SQL connection driver. Among the information provided by the
Build-an-app webpages are the following items:
Details about how to get started from the very beginning, for each combination of
language + operating system + driver.
Instructions for installing the latest SQL connection drivers.
Code examples for each of the following items:
Object-relational code examples.
ORM code examples.
Columnstore index demonstrations for much faster performance.

First page, of Build-an-app webpages:


Menu for Java - Ubuntu, of Build-an-app webpages

Related links
Code examples for connecting to Azure SQL Database in the cloud, with Java and
other languages.
Microsoft ADO.NET for SQL Server and
Azure SQL Database
Article • 03/20/2023


Download ADO.NET

ADO.NET is the core data access technology for .NET languages. Use the
Microsoft.Data.SqlClient library or Entity Framework to access SQL Server, or providers
from other suppliers to access their stores. Use System.Data.Odbc or System.Data.OleDb
to access data from .NET languages using other data access technologies. Use
System.Data.DataSet when you need an offline data cache in client applications. It also
provides local persistence and XML capabilities that can be useful in web services.

Getting started (SQL Server)


Step 1: Configure development environment for ADO.NET development
Step 2: Create a SQL database for ADO.NET development
Step 3: Proof of concept connecting to SQL using ADO.NET
Step 4: Connect resiliently to SQL with ADO.NET

Documentation
ADO.NET Overview
Getting started with the SqlClient driver
Overview of the SqlClient driver
Data type mappings in ADO.NET
Retrieving and modifying data in ADO.NET
SQL Server and ADO.NET

Community
ADO.NET Managed Providers Forum
ADO.NET DataSet Forum

More samples
ADO.NET Code Examples
Getting Started with .NET Framework on Windows
Getting Started with .NET Core on macOS
Getting Started with .NET Core on Ubuntu
Getting Started with .NET Core on Red Hat Enterprise Linux (RHEL)
Microsoft JDBC Driver for SQL Server
Article • 03/03/2023


Download JDBC driver

In our continued commitment to interoperability, Microsoft provides a Java Database


Connectivity (JDBC) driver for use with SQL Server, Azure SQL Database, and Azure SQL
Managed Instance. The driver is available at no extra charge and provides Java database
connectivity from any Java application, application server, or Java-enabled applet. This
driver is a Type 4 JDBC driver that provides database connectivity through the standard
JDBC application program interfaces (APIs).

The Microsoft JDBC Driver for SQL Server has been tested against major application
servers such as IBM WebSphere and SAP NetWeaver.

Getting started
Step 1: Configure development environment for Java development
Step 2: Create a SQL database for Java development
Step 3: Proof of concept connecting to SQL using Java

Documentation
Getting Started
Overview
Programming Guide
Security
Performance and Reliability
Troubleshooting
Code Samples
Compliance and Legal

Community
Feedback and finding additional JDBC driver information

Download
Download Microsoft JDBC Driver for SQL Server - has additional information about
Maven projects, and more.

Samples
Sample JDBC driver applications
Getting started with Java on Windows
Getting started with Java on macOS
Getting started with Java on Ubuntu
Getting started with Java on Red Hat Enterprise Linux (RHEL)
Getting started with Java on SUSE Linux Enterprise Server (SLES)
Node.js Driver for SQL Server
Article • 11/18/2022


Download Node.js SQL driver

The tedious module is a JavaScript implementation of the TDS protocol, which is


supported by all modern versions of SQL Server. The driver is an open-source project,
available on GitHub.

You can connect to a SQL Database using Node.js on Windows, Linux, or macOS.

Get started
Step 1: Configure development environment for Node.js development
Step 2: Create a SQL database for Node.js development
Step 3: Proof of concept connecting to SQL using Node.js

Documentation
Tedious module documentation on GitHub

Support
Tedious for Node.js is community-supported software. Microsoft contributes to the
tedious open-source community and is an active participant in the repository at

https://github.com/tediousjs/tedious . However, this software doesn't come with


Microsoft support.

To get help, file an issue in the tedious GitHub repository or visit other Node.js
community resources.

Community resources
Azure Node.js Developer Center
Get Involved at nodejs.org

Code examples
Getting Started with Node.js on Windows
Getting Started with Node.js on macOS
Getting Started with Node.js on Ubuntu
Getting Started with Node.js on Red Hat Enterprise Linux (RHEL)
Getting Started with Node.js on SUSE Linux Enterprise Server (SLES)
Microsoft ODBC Driver for SQL Server
Article • 06/15/2023

Version: 18.2.2.1

Date: June 15, 2023


Download ODBC driver

ODBC is the primary native data access API for applications written in C and C++ for
SQL Server. There's an ODBC driver for most data sources. Other languages that can use
ODBC include COBOL, Perl, PHP, and Python. ODBC is widely used in data integration
scenarios.

The ODBC driver comes with tools such as sqlcmd and bcp. The sqlcmd utility lets you
run Transact-SQL statements, system procedures, and SQL scripts. The bcp utility bulk
copies data between an instance of Microsoft SQL Server and a data file in a format you
choose. You can use bcp to import many new rows into SQL Server tables or to export
data out of tables into data files.

Code example in C++


The following sample demonstrates how to use the ODBC APIs to connect to and access
a database:

C++ code example, using ODBC

Download

Download ODBC driver

Documentation

Features
Connection Resiliency
Custom Keystore Providers
Data Classification
DSN and Connection String Keywords and Attributes
SQL Server Native Client (the features available also apply, without OLEDB, to the
ODBC Driver for SQL Server)
Using Always Encrypted
Using Azure Active Directory
Using Transparent Network IP Resolution
Using XA Transactions

Linux and macOS


Installing the driver on Linux
Installing the driver on macOS
Connecting to SQL Server
Connecting with bcp
Connecting with sqlcmd
Data Access Tracing
Frequently Asked Questions
Installing the Driver Manager
Known Issues
Programming Guidelines
Release Notes
Release Notes (mssql-tools)
Support for High Availability and Disaster Recovery
Using Integrated Authentication (Kerberos)

Windows
Asynchronous Execution (Notification Method) Sample
Driver-Aware Connection Pooling
Features and Behavior Changes
Release Notes for ODBC to SQL Server on Windows
System Requirements, Installation, and Driver Files

Community
SQL Server Drivers blog
SQL Server Data Access Forum
Microsoft Drivers for PHP for SQL
Server
Article • 11/18/2022


Download PHP driver

The Microsoft Drivers for PHP for SQL Server enable integration with SQL Server for PHP
applications. The drivers are PHP extensions that allow the reading and writing of SQL
Server data from within PHP scripts. The drivers provide interfaces for accessing data in
Azure SQL Database and in all editions of SQL Server 2005 and later (including Express
Editions). The drivers make use of PHP features, including PHP streams, to read and
write large objects.

Getting Started
Step 1: Configure development environment for PHP development
Step 2: Create a database for PHP development
Step 3: Proof of concept connecting to SQL using PHP
Step 4: Connect resiliently to SQL with PHP

Documentation
Getting Started
Overview
Programming Guide
Security Considerations

Community
Support Resources for the Microsoft Drivers for PHP for SQL Server

Download

Download drivers for PHP for SQL

Samples
Code Samples for the Microsoft Drivers for PHP for SQL Server
Getting Started with PHP on Windows
Getting Started with PHP on macOS
Getting Started with PHP on Ubuntu
Getting Started with PHP on Red Hat Enterprise Linux (RHEL)
Getting Started with PHP on SUSE Linux Enterprise Server (SLES)
Python SQL driver
Article • 11/18/2022


Install SQL driver for Python

You can connect to a SQL Database using Python on Windows, Linux, or macOS.

Getting started
There are several python SQL drivers available. However, Microsoft places its testing
efforts and its confidence in pyodbc driver. Choose one of the following drivers, and
configure your development environment:

Python SQL driver - pyodbc


Python SQL driver - pymssql

Documentation
For documentation, see Python documentation at Python.org .

Community
Azure Python Developer Center
python.org Community

Next steps
Explore samples that use Python to connect to a SQL database in the following articles:

Create a Python app in Azure App Service on Linux


Getting Started with Python on Windows
Getting Started with Python on macOS
Getting Started with Python on Ubuntu
Getting Started with Python on Red Hat Enterprise Linux (RHEL)
Getting Started with Python on SUSE Linux Enterprise Server (SLES)
Ruby Driver for SQL Server
Article • 11/18/2022


Download Ruby driver for SQL

You can connect to a SQL Database using Ruby on Windows, Linux, or macOS.

Get started
Step 1: Configure development environment for Ruby development
Step 2: Create a SQL database for Ruby development
Step 3: Proof of concept connecting to SQL using Ruby

Documentation
Documentation at ruby-lang.org

Support
Ruby and tiny_tds are community-supported software. This software doesn't come with
Microsoft support. To get help, visit the community resources.

Community resources
Azure Ruby Developer Center

Samples
Getting Started with Ruby on macOS
Getting Started with Ruby on Ubuntu
Getting Started with Ruby on Red Hat Enterprise Linux (RHEL)
Public data sets for testing and
prototyping
Article • 03/16/2023

Applies to:
Azure SQL Database
Azure SQL Managed Instance
SQL Server
on Azure VM

Browse this list of public data sets for data that you can use to prototype and test
storage and analytics services and solutions.

U.S. Government and agency data


Data source About the data About the files

US Over 250,000 data sets covering agriculture, Files of various sizes in


Government climate, consumer, ecosystems, education, energy, various formats including
data finance, health, local government, manufacturing, HTML, XML, CSV, JSON,
maritime, ocean, public safety, and science and Excel, and many others.
research in the U.S. You can filter available data
sets by file format.

US Census Statistical data about the population of the U.S. Data sets are in various
data formats.

Earth science Over 32,000 data collections covering agriculture, Data sets are in various
data from atmosphere, biosphere, climate, cryosphere, formats.
NASA human dimensions, hydrosphere, land surface,
oceans, sun-earth interactions, and more.

Airline flight "The U.S. Department of Transportation's (DOT) Files are in CSV format.
delays and Bureau of Transportation Statistics (BTS) tracks the
other on-time performance of domestic flights operated
transportation by large air carriers. Summary information on the
data number of on-time, delayed, canceled, and
diverted flights appears ... in summary tables
posted on this website."

Traffic "FARS is a nationwide census providing NHTSA, "Create your own fatality
fatalities - US Congress, and the American public yearly data data run online by using
Fatality regarding fatal injuries suffered in motor vehicle the FARS Query System. Or
Analysis traffic crashes." download all FARS data
Reporting from 1975 to present from
System the FTP Site."
(FARS)
Data source About the data About the files

Toxic chemical "EPA's most updated, publicly available high- Data sets are available in
data - EPA throughput toxicity data on thousands of various formats including
Toxicity chemicals. This data is generated through the spreadsheets, R packages,
ForeCaster EPA's ToxCast research effort." and MySQL database files.
(ToxCast™)
data

Toxic chemical "The 2014 Tox21 data challenge is designed to Data sets are available in
data - NIH help scientists understand the potential of the SMILES and SDF formats.
Tox21 Data chemicals and compounds being tested through The data provides "assay
Challenge the Toxicology in the 21st Century initiative to activity data and chemical
2014 disrupt biological pathways in ways that may result structures on the Tox21
in toxic effects." collection of ~10,000
compounds (Tox21 10K)."

Biotechnology Multiple data sets covering genes, genomes, and Data sets are in text, XML,
and genome proteins. BLAST, and other formats.
data from the A BLAST app is available.
NCBI

Other statistical and scientific data


Data source About the data About the
files

New York City "Taxi trip records include fields capturing pick-up and dropoff Data sets are
taxi data dates/times, pick-up and dropoff locations, trip distances, in CSV files by
itemized fares, rate types, payment types, and driver- month.
reported passenger counts."

Microsoft Multiple data sets covering human-computer interaction, Data sets are
Research data audio/video, data mining/information retrieval, in various
sets - "Data geospatial/location, natural language processing, and formats,
Science for robotics/computer vision. zipped for
Research" download.

Open Science "The Open Science Data Cloud provides the scientific Data sets are
Data Cloud community with resources for storing, sharing, and analyzing in various
data terabyte and petabyte-scale scientific datasets." formats.

Global climate "WorldClim is a set of global climate layers (gridded climate These files
data - data) with a spatial resolution of about 1 km2. These data can contain
WorldClim be used for mapping and spatial modeling." geospatial
data.
Data source About the data About the
files

Data about "The GDELT Project is the largest, most comprehensive, and The raw data
human society - highest resolution open database of human society ever files are in
The GDELT created." CSV format.
Project

Advertising click "The largest ever publicly released ML dataset." For more
prediction data info, see Criteo's 1 TB Click Prediction Dataset.
for machine
learning from
Criteo

ClueWeb09 text "The ClueWeb09 dataset was created to support research on See Dataset
mining data set information retrieval and related human language Information .
from The Lemur technologies. It consists of about 1 billion web pages in 10
Project languages that were collected in January and February 2009."

Online service data


Data About the data About the files
source

GitHub "GitHub Archive is a project to record the Download JSON-encoded event


archive public GitHub timeline [of events], archive it, archives in .gz (Gzip) format from a
and make it easily accessible for further web client.
analysis."

GitHub "The GHTorrent project [is] an effort to MySQL database dumps are in CSV
activity create a scalable, queryable, offline mirror of format.
data from data offered through the GitHub REST API.
The GHTorrent monitors the GitHub public event
GHTorrent time line. For each event, it retrieves its
project contents and their dependencies,
exhaustively."

Stack "This is an anonymized dump of all user- "Each site [such as Stack Overflow] is
Overflow contributed content on the Stack Exchange formatted as a separate archive
data network [including Stack Overflow]." consisting of XML files zipped via 7-
dump zip using bzip2 compression. Each
site archive includes Posts, Users,
Votes, Comments, PostHistory, and
PostLinks."
What's new in SQL Server on Azure
VMs? (Archive)
Article • 03/15/2023

Applies to:
SQL Server on Azure VM

This article summarizes older documentation changes associated with new features and
improvements in the recent releases of SQL Server on Azure VMs . To learn more
about SQL Server on Azure VMs, see the overview.

Return to What's new in SQL Server on Azure VMs?

2021
Changes Details

Deployment It's now possible to configure the following options when deploying your SQL
configuration Server VM from an Azure Marketplace image: System database location, number
improvements of tempdb data files, collation, max degree of parallelism, min and max server
memory settings, and optimize for ad hoc workloads. Review Deploy SQL Server
VM to learn more.

Automated The possible maximum automated backup retention period has changed from
backup 30 days to 90, and you're now able to choose a specific container within the
improvements storage account. Review automated backup to learn more.

Tempdb You can now modify tempdb settings directly from the SQL virtual machines
configuration blade in the Azure portal, such as increasing the size, and adding data files.

Eliminate Deploy your SQL Server VMs to multiple subnets to eliminate the dependency
need for on the Azure Load Balancer or distributed network name (DNN) to route traffic
HADR Azure to your high availability / disaster recovery (HADR) solution! See the multi-
Load Balancer subnet availability group tutorial, or prepare SQL Server VM for FCI article to
or DNN learn more.

SQL It's now possible to assess the health of your SQL Server VM in the Azure portal
Assessment using SQL Assessment to surface recommendations that improve performance,
and identify missing best practices configurations. This feature is currently in
preview.

SQL IaaS Support has been added to register your SQL Server VM running on Ubuntu
Agent Linux with the SQL Server IaaS Extension for limited functionality.
extension now
supports
Ubuntu
Changes Details

SQL IaaS Restarting the SQL Server service is no longer necessary when registering your
Agent SQL Server VM with the SQL IaaS Agent extension!
extension full
mode no
longer
requires
restart

Repair SQL It's now possible to verify the status of your SQL Server IaaS Agent extension
Server IaaS directly from the Azure portal, and repair it, if necessary.
extension in
portal

Security Once you've enabled Microsoft Defender for SQL, you can view Security Center
enhancements recommendations in the SQL virtual machines resource in the Azure portal.
in the Azure
portal

HADR content We've refreshed and enhanced our high availability and disaster recovery
refresh (HADR) content! There's now an Overview of the Windows Server Failover
Cluster, as well as a consolidated how-to configure quorum for SQL Server VMs.
Additionally, we've enhanced the cluster best practices with more
comprehensive setting recommendations adopted to the cloud.

Migrate high Azure Migrate brings support to lift and shift your entire high availability
availability to solution to SQL Server on Azure VMs! Bring your availability group or your
VM failover cluster instance to SQL Server VMs using Azure Migrate today!

Performance We've rewritten, refreshed, and updated the performance best practices
best practices documentation, splitting one article into a series that contains: a checklist, VM
refresh size guidance, Storage guidance, and collecting baseline instructions.

2020
Changes Details

Azure It's now possible to register SQL Server virtual machines with the SQL IaaS Agent
Government extension for virtual machines hosted in the Azure Government cloud.
support

Azure SQL SQL Server on Azure Virtual Machines is now a part of the Azure SQL family of
family products. Check out our new look! Nothing has changed in the product, but the
documentation aims to make the Azure SQL product decision easier.
Changes Details

Distributed SQL Server 2019 on Windows Server 2016+ is now previewing support for routing
network traffic to your failover cluster instance (FCI) by using a distributed network name
name (DNN) rather than using Azure Load Balancer. This support simplifies and streamlines
connecting to your high-availability (HA) solution in Azure.

FCI with It's now possible to deploy your failover cluster instance (FCI) by using Azure
Azure shared disks.
shared disks

Reorganized The documentation around failover cluster instances with SQL Server on Azure
FCI docs VMs has been rewritten and reorganized for clarity. We've separated some of the
configuration content, like the cluster configuration best practices, how to prepare
a virtual machine for a SQL Server FCI, and how to configure Azure Load Balancer.

Migrate log Learn how you can migrate your log file to an ultra disk to leverage high
to ultra disk performance and low latency.

Create It's now possible to simplify the creation of an availability group by using Azure
availability PowerShell as well as the Azure CLI.
group using
Azure
PowerShell

Configure It's now possible to configure your availability group via the Azure portal. This
availability feature is currently in preview and being deployed so if your desired region is
group in unavailable, check back soon.
portal

Automatic You can now enable the Automatic registration feature to automatically register all
extension SQL Server VMs already deployed to your subscription with the SQL IaaS Agent
registration extension. This applies to all existing VMs, and will also automatically register all
SQL Server VMs added in the future.

DNN for You can now configure a distributed network name (DNN) listener) for SQL Server
availability 2019 CU8 and later to replace the traditional VNN listener, negating the need for
group an Azure Load Balancer.

2019
Changes Details

Free DR replica in You can host a free passive instance for disaster recovery in Azure for your
Azure on-premises SQL Server instance if you have Software Assurance .

Bulk SQL IaaS You can now bulk register SQL Server virtual machines with the SQL IaaS
Agent extension Agent extension.
registration
Changes Details

Performance- You can now fully customize your storage configuration when creating a new
optimized SQL Server VM.
storage
configuration

Premium file You can now create a failover cluster instance by using a Premium file share
share for FCI instead of the original method of Storage Spaces Direct.

Azure Dedicated You can run your SQL Server VM on Azure Dedicated Host.
Host

SQL Server VM Use Azure Site Recovery to migrate your SQL Server VM from one region to
migration to a another.
different region

New SQL IaaS It's now possible to install the SQL Server IaaS extension in lightweight mode
installation to avoid restarting the SQL Server service.
modes

SQL Server You can now change the edition property for your SQL Server VM.
edition
modification

Changes to the You can register your SQL Server VM with the SQL IaaS Agent extension by
SQL IaaS Agent using the new SQL IaaS modes. This capability includes Windows Server 2008
extension images.

Bring-your-own- Bring-your-own-license images deployed from Azure Marketplace can now


license images switch their license type to pay-as-you-go.
using Azure
Hybrid Benefit

New SQL Server There's now a way to manage your SQL Server VM in the Azure portal. For
VM management more information, see Manage SQL Server VMs in the Azure portal.
in the Azure
portal

Extended support Extend support for SQL Server 2008 and SQL Server 2008 R2 by migrating as
for SQL Server is to an Azure VM.
2008 and 2008
R2

Custom image You can now install the SQL Server IaaS extension to custom OS and SQL
supportability Server images, which offers the limited functionality of flexible licensing.
When you're registering your custom image with the SQL IaaS Agent
extension, specify the license type as "AHUB." Otherwise, the registration will
fail.
Changes Details

Named instance You can now use the SQL Server IaaS extension with a named instance, if the
supportability default instance has been uninstalled properly.

Portal The Azure portal experience for deploying a SQL Server VM has been
enhancement revamped to improve usability. For more information, see the brief quickstart
and more thorough how-to guide to deploy a SQL Server VM.

Portal It's now possible to change the licensing model for a SQL Server VM from
improvement pay-as-you-go to bring-your-own-license by using the Azure portal.

Simplification of It's now easier than ever to deploy an availability group to a SQL Server VM
availability group in Azure. You can use the Azure CLI to create the Windows failover cluster,
deployment to a internal load balancer, and availability group listeners, all from the command
SQL Server VM line. For more information, see Use the Azure CLI to configure an Always On
through the availability group for SQL Server on an Azure VM.
Azure CLI

   

2018
Changes Details

New resource A new resource provider


provider for a SQL (Microsoft.SqlVirtualMachine/SqlVirtualMachineGroups) defines the
Server cluster metadata of the Windows failover cluster. Joining a SQL Server VM to
SqlVirtualMachineGroups bootstraps the Windows Server Failover Cluster
(WSFC) service and joins the VM to the cluster.

Automated setup It's now possible to create the Windows failover cluster, join SQL Server
of an availability VMs to it, create the listener, and configure the internal load balancer by
group deployment using two Azure Quickstart Templates. For more information, see Use Azure
with Azure Quickstart Templates to configure an Always On availability group for SQL
Quickstart Server on an Azure VM.
Templates

Automatic SQL Server VMs deployed after this month are automatically registered with
registration to the the new SQL IaaS Agent extension. SQL Server VMs deployed before this
SQL IaaS Agent month still need to be manually registered. For more information, see
extension Register a SQL Server virtual machine in Azure with the SQL IaaS Agent
extension.

New SQL IaaS A new resource provider (Microsoft.SqlVirtualMachine) provides better


Agent extension management of your SQL Server VMs. For more information on registering
your VMs, see Register a SQL Server virtual machine in Azure with the SQL
IaaS Agent extension.
Changes Details

Switch licensing You can now switch between the pay-per-usage and bring-your-own-
model license models for your SQL Server VM by using the Azure CLI or
PowerShell. For more information, see How to change the licensing model
for a SQL Server virtual machine in Azure.

   

Contribute to content
To contribute to the Azure SQL documentation, see the Docs contributor guide.

Additional resources
Windows VMs:

Overview of SQL Server on a Windows VM


Provision SQL Server on a Windows VM
Migrate a database to SQL Server on an Azure VM
High availability and disaster recovery for SQL Server on Azure Virtual Machines
Performance best practices for SQL Server on Azure Virtual Machines
Application patterns and development strategies for SQL Server on Azure Virtual
Machines

Linux VMs:

Overview of SQL Server on a Linux VM


Provision SQL Server on a Linux virtual machine
FAQ (Linux)
SQL Server on Linux documentation
Resolve capacity errors with Azure SQL
Database or Azure SQL Managed
Instance
Article • 08/30/2022

Applies to:
Azure SQL Database
Azure SQL Managed Instance

In this article, learn how to resolve capacity errors when deploying Azure SQL Database
or Azure SQL Managed Instance resources.

Exceeded quota
If you encounter any of the following errors when attempting to deploy your Azure SQL
resource, please request to increase your quota:

Server quota limit has been reached for this location. Please select a

different location with lower server count.

Could not perform the operation because server would exceed the allowed
Database Throughput Unit quota of xx.

During a scale operation, you may see the following error:

Could not perform the operation because server would exceed the allowed
Database Throughput Unit quota of xx.

Subscription access
Your subscription may not have access to create a server in the selected region if your
subscription has not been registered with the SQL resource provider (RP).

If you see the following errors, please register your subscription with the SQL RP:

Your subscription does not have access to create a server in the selected
region.

Provisioning is restricted in this region. Please choose a different region.

For exceptions to this rule please open a support request with issue type of
'Service and subscription limits'

Location 'region name' is not accepting creation of new Windows Azure SQL
Database servers for the subscription 'subscription id' at this time
Enable region
Your subscription may not have access to create a server in the selected region if that
region has not been enabled. To resolve this, file a support request to enable a specific
region for your subscription.

If you see the following errors, file a support ticket to enable a specific region:

Your subscription does not have access to create a server in the selected
region.

Provisioning is restricted in this region. Please choose a different region.

For exceptions to this rule please open a support request with issue type of
'Service and subscription limits'

Location 'region name' is not accepting creation of new Windows Azure SQL
Database servers for the subscription 'subscription id' at this time

Register with SQL RP


To deploy Azure SQL resources, register your subscription with the SQL resource
provider (RP).

You can register your subscription using the Azure portal, the Azure CLI, or Azure
PowerShell.

Azure portal

To register your subscription in the Azure portal, follow these steps:

1. Open the Azure portal and go to All Services.

2. Go to Subscriptions and select the subscription of interest.

3. On the Subscriptions page, select Resource providers under Settings.

4. Enter sql in the filter to bring up the SQL-related extensions.

5. Select Register, Re-register, or Unregister for the Microsoft.Sql provider, depending on your desired action.
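
Azure CLI

As a minimal sketch using the standard Azure CLI resource-provider commands (the same registration can also be done with Azure PowerShell's Register-AzResourceProvider):

# Register the Microsoft.Sql resource provider for the current subscription.
az provider register --namespace Microsoft.Sql

# Registration is asynchronous; check until registrationState shows "Registered".
az provider show --namespace Microsoft.Sql --query registrationState --output tsv
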
Additional provisioning issues
If you're still experiencing provisioning issues, please open a Region access request
under the support topic of SQL Database and specify the DTU or vCores you want to
consume on Azure SQL Database or Azure SQL Managed Instance.

Azure Program regions


Azure Program offerings (Azure Pass, Imagine, Azure for Students, MPN, BizSpark,
BizSpark Plus, Microsoft for Startups / Sponsorship Offers, Visual Studio Subscriptions /
MSDN) have access to a limited set of regions.

If your subscription is part of an Azure Program offering, and you would like to request
access to any of the following regions, please consider using an alternate region instead:

Australia Central, Australia Central 2, Australia SouthEast, Brazil SouthEast, Canada East,
China East, China North, China North 2, France South, Germany North, Japan West, JIO
India Central, JIO India West, Korea South, Norway West, South Africa West, South India,
Switzerland West, UAE Central, UK West, US DoD Central, US DoD East, US Gov Arizona,
US Gov Texas, West Central US, West India.

Next steps
After you submit your request, it will be reviewed. You will be contacted with an answer
based on the information you provided in the form.

For more information about other Azure limits, see Azure subscription and service limits,
quotas, and constraints.
Understanding the changes in the Root
CA change for Azure SQL Database &
SQL Managed Instance
Article • 02/24/2023

Azure SQL Database & SQL Managed Instance will be changing the root certificate that SSL-enabled client applications and drivers use to establish a secure TDS connection. The current root certificate is set to expire on October 26, 2020, as part of standard maintenance and security best practices. This article gives you more details about the upcoming changes, the resources that will be affected, and the steps needed to ensure that your application maintains connectivity to your database server.

What update is going to happen?


The Certificate Authority (CA) Browser Forum recently published reports that multiple certificates issued by CA vendors are non-compliant.

Per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs and signed by CA certificates from those compliant CAs. Since Azure SQL Database & SQL Managed Instance currently use one of these non-compliant certificates, which client applications use to validate their SSL connections, appropriate actions (described below) are needed to minimize the potential impact to your Azure SQL servers.

The new certificate will be used starting October 26, 2020. If you use full validation of the server certificate when connecting from a SQL client (TrustServerCertificate=false), you need to ensure that your SQL client can validate the new root certificate before October 26, 2020.

How do I know if my application might be affected?

All applications that use SSL/TLS and verify the root certificate need to update the root certificate in order to connect to Azure SQL Database & SQL Managed Instance.

If you are not using SSL/TLS currently, there is no impact to your application availability. You can verify whether your client application is trying to verify the root certificate by looking at the connection string. If TrustServerCertificate is explicitly set to true, you are not affected.
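
For example, an ADO.NET-style connection string shaped like the following (server and database names are placeholders) performs full server certificate validation, so the root CA change applies to it:

Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=YourDatabase;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;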

If your client driver utilizes the OS certificate store, as the majority of drivers do, and your OS is regularly maintained, this change will likely not affect you, because the root certificate we are switching to should already be available in your Trusted Root Certificate Store. Check for DigiCert Global Root G2 and validate that it is present.
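
As a sketch for a Linux client (the bundle path shown is Debian/Ubuntu's and varies by distribution; openssl storeutl requires OpenSSL 1.1 or later):

# Print the details of every certificate in the system bundle and search for the new root CA.
openssl storeutl -noout -certs -text /etc/ssl/certs/ca-certificates.crt | grep "DigiCert Global Root G2"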

If your client driver utilizes a local file certificate store, refer to the What do I need to do to maintain connectivity section to avoid your application's availability being interrupted due to certificates being unexpectedly revoked, or to update a certificate that has been revoked.

What do I need to do to maintain connectivity

To avoid your application's availability being interrupted due to certificates being unexpectedly revoked, or to update a certificate that has been revoked, follow these steps:

Download the Baltimore CyberTrust Root & DigiCert Global Root G2 CA certificates from the links below:
https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem

Generate a combined CA certificate store in which both the BaltimoreCyberTrustRoot and DigiCertGlobalRootG2 certificates are included; a minimal sketch follows.
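
For illustration, on a Unix-like client with curl (the combined file name is a placeholder; point your driver's CA setting at the resulting bundle):

# Download both root CA certificates (URLs from this article).
curl -fsSL -o BaltimoreCyberTrustRoot.crt.pem https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
curl -fsSL -o DigiCertGlobalRootG2.crt.pem https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem

# Concatenate them into a single PEM bundle for drivers that read a local certificate file.
cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > azure-sql-combined-roots.pem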

What can be the impact?


If you are validating server certificates as documented here, your application's availability might be interrupted, since the database will not be reachable. Depending on your application, you may receive a variety of error messages, including but not limited to:

Invalid certificate/revoked certificate
Connection timed out
Other errors, if applicable

Frequently asked questions


If I am not using SSL/TLS, do I still need to update the
root CA?
No actions regarding this change are required if you are not using SSL/TLS. Still, you should plan to start using the latest TLS version, as we plan to enforce TLS in the near future.

What will happen if I do not update the root certificate before October 26, 2020?

If you do not update the root certificate before October 26, 2020, applications that connect via SSL/TLS and verify the root certificate will be unable to communicate with Azure SQL Database & SQL Managed Instance, and will experience connectivity issues.

Do I need to plan a maintenance downtime for this change?

No. Since the change here is only on the client side that connects to the server, no maintenance downtime is needed for this change.

What if I cannot get a scheduled downtime for this change before October 26, 2020?

Since only the clients used for connecting to the server need to update the certificate information as described in the fix section here, no downtime is needed for the server in this case.

If I create a new server after October 26, 2020, will I be impacted?

For servers created after October 26, 2020, you can use the newly issued certificate for your applications to connect using SSL.

How often does Microsoft update their certificates, or what is the expiry policy?

The certificates used by Azure SQL Database & SQL Managed Instance are provided by trusted Certificate Authorities (CAs), so their support on Azure SQL Database & SQL Managed Instance is tied to the CA's support for them. However, as in this case, there can be unforeseen bugs in these predefined certificates, which need to be fixed as soon as possible.

If I am using read replicas, do I need to perform this update only on the primary server or on all the read replicas?

Since this update is a client-side change, the update needs to be applied on any client used to read data from a replica server as well.

Do we have a server-side query to verify whether SSL is being used?

Since this configuration is client-side, this information is not available on the server side.

What if I have further questions?

If you have a support plan and you need technical help, create an Azure support request; see How to create an Azure support request.
Azure Architecture Center
Guidance for architecting solutions on Azure using established patterns and practices.

ARCHITECTURE: Browse Azure architectures
CONCEPT: Explore cloud best practices
HOW-TO GUIDE: Assess, optimize, and review your workload
WHAT'S NEW: See what's new

Architecting applications on Azure
Best practices and patterns for building applications on Microsoft Azure

Design for the cloud
Principles of a well-designed application
Best practices in cloud applications
Responsible engineering
Application design patterns
Architect multitenant solutions on Azure
Build microservices on Azure

Optimize your workload
Guiding tenets for your architecture
Examine your workload
Performance tuning
Performance antipatterns
Secure your infrastructure

Choose the right technology
Choose a compute service
Choose a Kubernetes at the edge option
Choose a data store
Choose a load-balancing service
Choose a messaging service
Choose an IoT solution

Essential scenarios
Architecture for startups
Azure and Power Platform solutions
Azure and Microsoft 365 solutions
AWS services comparison
Google Cloud services comparison

Technology Areas
Explore architectures and guides for different technologies

Popular Articles
AKS Production Baseline
AWS to Azure services comparison
Google Cloud to Azure services comparison
Cloud Design Patterns
CQRS design pattern
Best practices in cloud applications
Web API design
Performance antipatterns for cloud applications
Choose your Azure compute service
Application architecture fundamentals
Hub-spoke network topology
Architect multitenant solutions on Azure

AI & Machine Learning
Artificial intelligence (AI) architecture design
Training of Python scikit-learn models
Distributed training of deep learning models
Batch scoring of Python models
Conversational bot
Machine learning options
Machine learning at scale
Natural language processing
Movie recommendation
Cognitive services options
Team Data Science Process

Analytics
Analytics architecture design
Choose an analytical data store in Azure
Choose a data analytics technology in Azure
Analytics end-to-end with Azure Synapse
Automated enterprise BI with Azure Data Factory
Stream processing with Azure Databricks
Databricks Monitoring
Advanced analytics architecture
IoT analytics for construction
Real-time fraud detection
Mining equipment monitoring
Predict the length of stay in hospitals

Databases
Databases architecture design
Big Data architectures
Build a scalable system for massive data
Choose a data store
Extract, transform, and load (ETL)
Online analytical processing (OLAP)
Online transaction processing (OLTP)
Data warehousing in Microsoft Azure
Data lakes
Extend on-premises data solutions to the cloud
Free-form text search
Time series solutions

DevOps
Checklist
Advanced Azure Resource Manager Templates
DevOps with Azure DevOps
DevOps with containers
Jenkins server

Enterprise integration
Basic enterprise integration
Enterprise BI with SQL Data Warehouse
Enterprise integration with queues and events

High performance computing (HPC)
Computational fluid dynamics (CFD)
Computer-aided engineering
HPC video rendering
Image Modeling
Linux virtual desktops
Introduction to HPC on Azure

Identity
Identity in multitenant applications
Choose an Active Directory integration architecture
Integrate on-premises AD with Azure AD
Extend AD DS to Azure
Create an AD DS forest in Azure
Extend AD FS to Azure

Internet of Things (IoT)
Internet of Things (IoT) architecture
Automotive IoT data
Telehealth System

Microservices
Domain analysis
Tactical DDD
Identify microservice boundaries
Design a microservices architecture
Monitor microservices in Azure Kubernetes Service (AKS)
CI/CD for microservices
CI/CD for microservices on Kubernetes
Migrate from Cloud Services to Service Fabric
Azure Kubernetes Service (AKS)
Azure Service Fabric
Decomposing a monolithic application
Introduction to microservices on Azure


Networking
Choose a hybrid network architecture
ExpressRoute
ExpressRoute with VPN failover
Troubleshoot a hybrid VPN connection
Hub-spoke topology
DMZ between Azure and on-premises
DMZ between Azure and the Internet
Highly available network virtual appliances
Segmenting Virtual Networks

Serverless applications
Code walkthrough
Serverless event processing
Serverless web application
Introduction to Serverless Applications on Azure

VM workloads
Linux VM deployment
Windows VM deployment
N-tier application with Cassandra (Linux)
N-tier application with SQL Server (Windows)
Multi-region N-tier application
Highly scalable WordPress
Multi-tier Windows

SAP
Overview
SAP HANA on Azure (Large Instances)
SAP HANA Scale-up on Linux
SAP NetWeaver on Windows on Azure
SAP S/4HANA in Linux on Azure
SAP BW/4HANA in Linux on Azure
SAP NetWeaver on SQL Server
SAP deployment using an Oracle DB
Dev/test for SAP

Web apps
Basic web application
Baseline zone-redundant web application
Multi-region deployment
Web application monitoring
E-commerce API management
E-commerce front-end
E-commerce product search
Publishing internal APIs externally
Securely managed web application
Highly available web application
Build your skills with Microsoft Learn training

Build great solutions with the Microsoft Azure Well-Architected Framework
Introduction to the Well-Architected Framework
Azure Fundamentals
Security, responsibility, and trust in Azure
Architect infrastructure operations in Azure
Tour the N-tier architecture style
