Applies to: Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM
Azure SQL is a family of managed, secure, and intelligent products that use the SQL
Server database engine in the Azure cloud.
Azure SQL is built upon the familiar SQL Server engine, so you can migrate applications
with ease and continue to use the tools, languages, and resources you're familiar with.
Your skills and experience transfer to the cloud, so you can do even more with what you
already have.
Learn how each product fits into Microsoft's Azure SQL data platform to match the right
option for your business requirements. Whether you prioritize cost savings or minimal
administration, this article can help you decide which approach delivers against the
business requirements you care about most.
If you're new to Azure SQL, check out the What is Azure SQL video from our in-depth
Azure SQL video series:
https://learn.microsoft.com/shows/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player
Overview
In today's data-driven world, driving digital transformation increasingly depends on our
ability to manage massive amounts of data and harness its potential. But today's data
estates are increasingly complex, with data hosted on-premises, in the cloud, or at the
edge of the network. Developers who are building intelligent and immersive
applications can find themselves constrained by limitations that can ultimately impact
their experience. Limitations arising from incompatible platforms, inadequate data
security, insufficient resources and price-performance barriers create complexity that
can inhibit app modernization and development.
One of the first things to understand in any discussion of Azure versus on-premises SQL
Server databases is that you can use it all. Microsoft's data platform leverages SQL
Server technology and makes it available across physical on-premises machines, private
cloud environments, third-party hosted private cloud environments, and the public
cloud.
- Remediate potential threats in real time with intelligent advanced threat detection and proactive vulnerability assessment alerts.
- Get industry-leading, multi-layered protection with built-in security controls including T-SQL, authentication, networking, and key management.
- Take advantage of the most comprehensive compliance coverage of any cloud database service.
Business motivations
There are several factors that can influence your decision to choose between the
different data offerings:
Cost: Both platform as a service (PaaS) and infrastructure as a service (IaaS) options
include a base price that covers the underlying infrastructure and licensing. However,
with the IaaS option you need to invest additional time and resources to manage
your database, while with PaaS the administration features are included in the price.
IaaS enables you to shut down resources while you aren't using them to decrease
the cost, while PaaS is always running unless you drop and re-create your
resources when they're needed.
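The cost trade-off described above can be sketched numerically. The hourly rates below are hypothetical placeholders, not published Azure prices:

```python
# Hedged sketch: monthly compute cost of an always-on PaaS database versus an
# IaaS VM that is deallocated outside business hours.
# The hourly rates are illustrative placeholders only.

HOURS_PER_MONTH = 730

def paas_monthly_cost(hourly_rate: float) -> float:
    """PaaS runs (and bills) continuously unless you drop the resource."""
    return hourly_rate * HOURS_PER_MONTH

def iaas_monthly_cost(hourly_rate: float, hours_running: float) -> float:
    """A deallocated VM stops billing for compute (storage costs are ignored here)."""
    return hourly_rate * hours_running

paas = paas_monthly_cost(0.50)                          # always on
iaas = iaas_monthly_cost(0.60, hours_running=12 * 22)   # 12 h/day, 22 workdays
print(f"PaaS: ${paas:.2f}, IaaS: ${iaas:.2f}")
```

Even at a higher hourly rate, the IaaS option can come out cheaper for intermittent workloads, which is the point the paragraph makes.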
Administration: PaaS options reduce the amount of time that you need to invest to
administer the database. However, they also limit the range of custom administration
tasks and scripts that you can perform or run. For example, the CLR isn't supported
with SQL Database, but is supported for an instance of SQL Managed Instance.
Also, no deployment options in PaaS support the use of trace flags.
Service-level agreement: Both IaaS and PaaS provide a high, industry-standard SLA.
The PaaS option guarantees a 99.99% SLA, while IaaS guarantees a 99.95% SLA for the
infrastructure only, meaning that you need to implement additional mechanisms to
ensure the availability of your databases. You can attain a 99.99% SLA by creating an
additional SQL virtual machine and implementing the SQL Server Always On
availability group high availability solution.
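To make the SLA difference concrete, the percentages translate into maximum expected downtime as follows (a simple arithmetic sketch):

```python
# Hedged sketch: converting SLA percentages into maximum expected downtime,
# comparing 99.99% (PaaS) with 99.95% (IaaS infrastructure) over a 30-day month.

def max_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(f"99.99%: {max_downtime_minutes(99.99):.1f} min/month")   # PaaS
print(f"99.95%: {max_downtime_minutes(99.95):.1f} min/month")   # IaaS infrastructure
```

The 0.04-point difference in the headline number corresponds to roughly a five-fold difference in allowed downtime per month.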
Time to move to Azure: SQL Server on Azure VM is an exact match of your
environment, so migration from on-premises to an Azure VM is no different from
moving databases from one on-premises server to another. SQL Managed
Instance also enables easy migration; however, there might be some changes that
you need to apply before your migration.
Service comparison
Each service offering can be characterized by the level of administration you have
over the infrastructure, and by the degree of cost efficiency.
In Azure, you can have your SQL Server workloads running as a hosted service (PaaS),
or as a hosted infrastructure (IaaS) supporting the software layer, such as
Software-as-a-Service (SaaS) or an application. Within PaaS, you have multiple product options, and
service tiers within each option. The key question that you need to ask when deciding
between PaaS or IaaS is - do you want to manage your database, apply patches, and
take backups - or do you want to delegate these operations to Azure?
Best for modern cloud applications that want to use the latest stable SQL Server
features and have time constraints in development and marketing.
A fully managed SQL Server database engine, based on the latest stable Enterprise
Edition of SQL Server. SQL Database has two deployment options built on
standardized hardware and software that is owned, hosted, and maintained by
Microsoft.
With SQL Database, you can use built-in features and functionality that would otherwise
require extensive configuration (either on-premises or in an Azure virtual machine).
When using SQL Database, you pay as you go, with options to scale up or out for greater
power with no interruption. SQL Database has some additional features that aren't
available in SQL Server, such as built-in high availability, intelligence, and management.
- A single database with its own set of resources managed via a logical SQL server. A single database is similar to a contained database in SQL Server. This option is optimized for modern application development of new cloud-born applications. Hyperscale and serverless options are available.
- An elastic pool, which is a collection of databases with a shared set of resources managed via a logical server. Single databases can be moved into and out of an elastic pool. This option is optimized for modern application development of new cloud-born applications using the multi-tenant SaaS application pattern. Elastic pools provide a cost-effective solution for managing the performance of multiple databases that have variable usage patterns.
Best for new applications or existing on-premises applications that want to use the
latest stable SQL Server features and that are migrated to the cloud with minimal
changes. An instance of SQL Managed Instance is similar to an instance of the
Microsoft SQL Server database engine offering shared resources for databases and
additional instance-scoped features.
SQL Managed Instance supports database migration from on-premises with
minimal to no database changes. This option provides all of the PaaS benefits of
Azure SQL Database, but adds capabilities that were previously available only in
SQL Server VMs, including a native virtual network and near-100% compatibility
with on-premises SQL Server. Instances of SQL Managed Instance provide full SQL
Server access and feature compatibility for migrating SQL Servers to Azure.
SQL Server installed and hosted in the cloud runs on Windows Server or Linux
virtual machines running on Azure, also known as an infrastructure as a service
(IaaS). SQL virtual machines are a good option for migrating on-premises SQL
Server databases and applications without any database change. All recent
versions and editions of SQL Server are available for installation in an IaaS virtual
machine.
Best for migrations and applications requiring OS-level access. SQL virtual
machines in Azure are lift-and-shift ready for existing applications that require fast
migration to the cloud with minimal changes or no changes. SQL virtual machines
offer full administrative control over the SQL Server instance and underlying OS for
migration to Azure.
The most significant difference from SQL Database and SQL Managed Instance is
that SQL Server on Azure Virtual Machines allows full control over the database
engine. You can choose when to start maintenance activities including system
updates, change the recovery model to simple or bulk-logged, pause or start the
service when needed, and you can fully customize the SQL Server database engine.
With this additional control comes the added responsibility to manage the virtual
machine.
Rapid development and test scenarios when you don't want to buy on-premises
hardware for SQL Server. SQL virtual machines also run on standardized hardware
that is owned, hosted, and maintained by Microsoft. When using SQL virtual
machines, you can either pay-as-you-go for a SQL Server license already included
in a SQL Server image or easily use an existing license. You can also stop or resume
the VM as needed.
Optimized for migrating existing applications to Azure or extending existing on-
premises applications to the cloud in hybrid deployments. In addition, you can use
SQL Server in a virtual machine to develop and test traditional SQL Server
applications. With SQL virtual machines, you have the full administrative rights over
a dedicated SQL Server instance and a cloud-based VM. It is a perfect choice when
an organization already has IT resources available to maintain the virtual machines.
These capabilities allow you to build a highly customized system to address your
application's specific performance and availability requirements.
Comparison table
Additional differences are listed in the following table, but both SQL Database and SQL
Managed Instance are optimized to reduce overall management costs to a minimum for
provisioning and managing many databases. Ongoing administration costs are reduced
since you don't have to manage any virtual machines, operating system, or database
software. You don't have to manage upgrades, high availability, or backups.
In general, SQL Database and SQL Managed Instance can dramatically increase the
number of databases managed by a single IT or development resource. Elastic pools
also support SaaS multi-tenant application architectures with features including tenant
isolation and the ability to scale to reduce costs by sharing resources across databases.
SQL Managed Instance provides support for instance-scoped features, enabling easy
migration of existing applications as well as sharing resources among databases,
whereas SQL Server on Azure VMs provides DBAs with an experience most similar to the
on-premises environment they're familiar with.
Azure SQL Database
- Supports most on-premises database-level capabilities; the most commonly used SQL Server features are available.
- 99.995% availability guaranteed.
- Built-in backups, patching, recovery.
- Latest stable Database Engine version.
- Ability to assign necessary resources (CPU/storage) to individual databases.
- Built-in advanced intelligence and security.
- Online change of resources (CPU/storage).
- Migration from SQL Server might be challenging.
- Some SQL Server features aren't available.
- Private IP address support with Azure Private Link.
- On-premises applications can access data in Azure SQL Database.

Azure SQL Managed Instance
- Supports almost all on-premises database-level capabilities; high compatibility with SQL Server.
- 99.99% availability guaranteed.
- Built-in backups, patching, recovery.
- Private IP address within Azure Virtual Network.
- Online change of resources (CPU/storage).
- There's still some minimal number of SQL Server features that aren't available.
- Configurable maintenance windows.
- Native virtual network implementation and connectivity to your on-premises environment using Azure ExpressRoute or VPN Gateway.

SQL Server on Azure VM
- You have full control over the SQL Server engine; supports all on-premises capabilities.
- Full parity with the matching version of on-premises SQL Server.
- Fixed, well-known Database Engine version.
- Private IP address within Azure Virtual Network.
- You have the ability to deploy applications or services on the host where SQL Server is placed.
- You may use manual or automated backups.
- You need to implement your own high-availability solution.
- There's downtime while changing the resources (CPU/storage).
- With SQL virtual machines, you can have applications that run partly in the cloud and partly on-premises. For example, you can extend your on-premises network and Active Directory domain to the cloud via Azure Virtual Network. For more information on hybrid cloud solutions, see Extending on-premises data solutions to the cloud.
Cost
Whether you're a startup that is strapped for cash, or a team in an established company
that operates under tight budget constraints, limited funding is often the primary driver
when deciding how to host your databases. In this section, you learn about the billing
and licensing basics in Azure associated with the Azure SQL family of services. You also
learn about calculating the total application cost.
Billing and licensing basics
Currently, both SQL Database and SQL Managed Instance are sold as a service and are
available with several options and in several service tiers with different prices for
resources, all of which are billed hourly at a fixed rate based on the service tier and
compute size you choose. For the latest information on the current supported service
tiers, compute sizes, and storage amounts, see DTU-based purchasing model for SQL
Database and vCore-based purchasing model for both SQL Database and SQL Managed
Instance.
- With SQL Database, you can choose a service tier that fits your needs from a wide range of prices starting at $5/month for the Basic tier, and you can create elastic pools to share resources among databases to reduce costs and accommodate usage spikes.
- With SQL Managed Instance, you can also bring your own license. For more information on bring-your-own licensing, see License Mobility through Software Assurance on Azure, or use the Azure Hybrid Benefit calculator to see how to save up to 40%.
In addition, you're billed for outgoing Internet traffic at regular data transfer rates. You
can dynamically adjust service tiers and compute sizes to match your application's
varied throughput needs.
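As a rough sketch of how hourly billing interacts with mid-month scaling (the rates are hypothetical, not from a price list):

```python
# Hedged sketch: because SQL Database bills hourly at a fixed per-tier rate,
# scaling mid-month changes the bill pro rata over the hours spent in each tier.

def monthly_bill(segments):
    """segments: list of (hours, hourly_rate) tuples covering the billing period."""
    return sum(hours * rate for hours, rate in segments)

# e.g. 700 hours on a small tier, plus 30 hours scaled up for a usage spike
bill = monthly_bill([(700, 0.10), (30, 0.80)])
print(f"${bill:.2f}")
```

A short spike at a higher tier adds only the hours actually spent there, which is what makes dynamic adjustment cost-effective.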
With SQL Database and SQL Managed Instance, the database software is automatically
configured, patched, and upgraded by Azure, which reduces your administration costs.
In addition, its built-in backup capabilities help you achieve significant cost savings,
especially when you have a large number of databases.
With SQL on Azure VMs, you can use any of the platform-provided SQL Server images
(which include a license) or bring your own SQL Server license. All the supported SQL Server
versions (2008R2, 2012, 2014, 2016, 2017, 2019) and editions (Developer, Express, Web,
Standard, Enterprise) are available. In addition, Bring-Your-Own-License (BYOL) versions
of the images are available. When using the Azure-provided images, the operational cost
depends on the VM size and the edition of SQL Server you choose. Regardless of VM
size or SQL Server edition, you pay a per-minute licensing cost for SQL Server and for
Windows or Linux Server, along with the Azure Storage cost for the VM disks. The per-minute
billing option allows you to use SQL Server for as long as you need without
buying additional SQL Server licenses. If you bring your own SQL Server license to Azure,
you are charged for server and storage costs only. For more information on bring-your-own
licensing, see License Mobility through Software Assurance on Azure. In addition,
you are billed for outgoing Internet traffic at regular data transfer rates.
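The per-minute licensing model described above can be sketched as follows; all rates are hypothetical placeholders:

```python
# Hedged sketch: per-minute billing of a pay-as-you-go (PAYG) SQL Server image
# versus bringing your own license (BYOL), where only server + storage are charged.
# Every rate below is an illustrative placeholder, not a published price.

def payg_cost(minutes, vm_rate_per_min, sql_license_rate_per_min, storage_cost):
    # PAYG: VM compute + SQL Server license accrue per minute, plus storage
    return minutes * (vm_rate_per_min + sql_license_rate_per_min) + storage_cost

def byol_cost(minutes, vm_rate_per_min, storage_cost):
    # BYOL: the SQL Server license term drops out; only server + storage remain
    return minutes * vm_rate_per_min + storage_cost

minutes = 30 * 24 * 60          # one 30-day month
payg = payg_cost(minutes, 0.002, 0.004, 50.0)
byol = byol_cost(minutes, 0.002, 50.0)
print(f"PAYG: ${payg:.2f}, BYOL: ${byol:.2f}")
```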
Calculating the total application cost
When you start using a cloud platform, the cost of running your application includes the
cost for new development and ongoing administration costs, plus the public cloud
platform service costs.
Administration
For many businesses, the decision to transition to a cloud service is as much about
offloading the complexity of administration as it is about cost. With IaaS and PaaS, Azure
administers the underlying infrastructure and automatically replicates all data to provide
disaster recovery, configures and upgrades the database software, manages load
balancing, and does transparent failover if there's a server failure within a data center.
With SQL Database and SQL Managed Instance, you can continue to administer
your database, but you no longer need to manage the database engine, the
operating system, or the hardware. Examples of items you can continue to
administer include databases and logins, index and query tuning, and auditing and
security. Additionally, configuring high availability to another data center requires
minimal configuration and administration.
With SQL on Azure VM, you have full control over the operating system and SQL
Server instance configuration. With a VM, it's up to you to decide when to
update/upgrade the operating system and database software and when to install
any additional software such as anti-virus. Some automated features are provided
to dramatically simplify patching, backup, and high availability. In addition, you can
control the size of the VM, the number of disks, and their storage configurations.
Azure allows you to change the size of a VM as needed. For information, see
Virtual Machine and Cloud Service Sizes for Azure.
For SQL on Azure VM, Microsoft provides an availability SLA of 99.95% for two virtual
machines in an availability set, or 99.99% for two virtual machines in different availability
zones. This means that at least one of the two virtual machines will be available for the
given SLA, but it does not cover the processes (such as SQL Server) running on the VM.
For the latest information, see the VM SLA . For database high availability (HA) within
VMs, you should configure one of the supported high availability options in SQL Server,
such as Always On availability groups. Using a supported high availability option doesn't
provide an additional SLA, but allows you to achieve >99.99% database availability.
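One way to see why a second VM raises availability: if failures are independent, the probability that both VMs are down at once is the product of their individual downtime probabilities. This is a simplified model for intuition only, not how the SLA figures are actually derived:

```python
# Hedged sketch: composite availability of a two-VM pair under an
# independence assumption (real zone/set SLAs are contractual, not computed this way).

def pair_availability(single_vm_availability: float) -> float:
    p_down = 1 - single_vm_availability
    return 1 - p_down ** 2   # probability that at least one VM is up

# A single VM at 99.9% yields a pair availability well above 99.99%
print(f"{pair_availability(0.999):.6f}")
```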
Azure SQL Managed Instance greatly simplifies the migration of existing applications to
Azure, enabling you to bring migrated database applications to market in Azure quickly.
SQL on Azure VM is perfect if your existing or new applications require large databases
or access to all features in SQL Server or Windows/Linux, and you want to avoid the time
and expense of acquiring new on-premises hardware. It's also a good fit when you want
to migrate existing on-premises applications and databases to Azure as-is - in cases
where SQL Database or SQL Managed Instance isn't a good fit. Since you don't need to
change the presentation, application, and data layers, you save time and budget on
rearchitecting your existing solution. Instead, you can focus on migrating all your
solutions to Azure and in doing some performance optimizations that may be required
by the Azure platform. For more information, see Performance Best Practices for SQL
Server on Azure Virtual Machines.
To access the Azure SQL page, from the Azure portal menu, select Azure SQL or search
for and select Azure SQL in any page.
Note
Azure SQL provides a quick and easy way to access all of your SQL resources in the
Azure portal, including single and pooled databases in Azure SQL Database as well
as the logical server hosting them, Azure SQL Managed Instances, and SQL Server
on Azure VMs. Azure SQL is not a service or resource, but rather a family of SQL-
related services.
To manage existing resources, select the desired item in the list. To create new Azure
SQL resources, select + Create.
After selecting + Create, view additional information about the different options by
selecting Show details on any tile.
Next steps
- See Your first Azure SQL Database to get started with SQL Database.
- See Your first Azure SQL Managed Instance to get started with SQL Managed Instance.
- See SQL Database pricing.
- See Azure SQL Managed Instance pricing.
- See Provision a SQL Server virtual machine in Azure to get started with SQL Server on Azure VMs.
- Identify the right SQL Database or SQL Managed Instance SKU for your on-premises database.
Migrate to Azure SQL
Find documentation on how to migrate to the Azure SQL family of SQL Server database
engine products in the cloud: Azure SQL Database, Azure SQL Managed Instance, and
SQL Server on Azure VM.
Applies to: SQL Server, Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM
Migrating on-premises SQL Server workloads and associated applications to the cloud
usually brings a wide range of questions that go beyond mere product feature
information.
This article provides a holistic view and helps you understand how to fully unlock the value
when migrating to Azure SQL. The Modernize applications and SQL section covers
questions about Azure SQL in general as well as common application and SQL
modernization scenarios. The Business and technical evaluation section covers cost
saving, licensing, minimizing migration risk, business continuity, security, workloads and
architecture, performance and similar business and technical evaluation questions. The
last section covers the actual Migration and modernization process, including guidance
on migration tools.
Azure SQL
Azure SQL is a family of services that use the SQL Server database engine in the Azure
Cloud. The following services belong to Azure SQL: Azure SQL Database (SQL Database),
Azure SQL Managed Instance (SQL Managed Instance) and SQL Server on Azure VMs.
PaaS provides additional advantages over IaaS, such as shorter development cycles,
extra development capabilities without adding staff, and affordable access to sophisticated
tools, to name a few. Azure SQL provides both PaaS (SQL Managed Instance, SQL
Database) and IaaS (SQL VM) services.
- SQL Managed Instance is the right PaaS target to modernize your existing SQL Server applications at scale, providing almost all SQL Server features (including instance-level features) while reducing the costs of server and database management.
- SQL Database is the most appropriate choice when building native cloud applications, as it offers high elasticity and flexibility in choosing between architectural and compute tiers, such as the Serverless tier for increased elasticity and the Hyperscale tier for highly scalable storage and compute resources.
- If you need full control and customization, including OS access, you can opt for SQL Server on Azure VM. The service comparison provides more details. A range of migration tools helps you make the optimal choice by providing an assessment of target service compatibility and costs.
Moving to Azure brings savings in resource, maintenance, and real estate costs, in
addition to the ability to optimize workloads so that they cost less to run. Azure SQL
Managed Instance and SQL Database bring all the advantages of PaaS services,
providing automated performance tuning, backups, software patching, and high
availability, all of which entail enormous effort and cost when performed manually.
For example, SQL Managed Instance and SQL Database (single database and elastic
pool) come with built-in HA. Also, Business Critical (SQL Managed Instance) and
Premium (SQL Database) tiers provide read-only replicas at no additional cost, while
SQL Database Hyperscale tier allows HA and named secondary replicas for read scale-
out at no license cost. Additionally, Software Assurance customers can use their on-
premises SQL Server license on Azure by applying Azure Hybrid Benefit (AHB).
Software Assurance also lets you implement free passive HA and DR secondaries using
SQL VM.
In addition, every Azure SQL service provides you the option to reserve instances in
advance (1-3 years) and obtain significant additional savings. Dev/Test pricing plans
provide a way to further reduce development costs. Finally, check the following article
on how you can Optimize your Azure SQL Managed Instance cost with Microsoft Azure
Well-Architected Framework .
What is the best licensing path to save costs when moving existing
SQL Server workloads to Azure?
Unique to Azure, Azure Hybrid Benefit (AHB) is a licensing benefit that allows you to bring
your existing Windows Server and SQL Server licenses with Software Assurance
(SA) to Azure. Combined with reservation savings and extended security updates, AHB
can bring you up to 85% savings compared to pay-as-you-go pricing in Azure SQL. In
addition, make sure to check the different Dev/Test pricing plans.
Scenario 2: Reducing SQL Server costs: How can I reduce the cost
for my existing SQL Server fleet?
Moving to Azure SQL VMs, SQL Managed Instance or SQL Database brings savings in
resource, maintenance, and real estate costs. Using your SQL Server on-premises
licenses in Azure via Azure Hybrid Benefit , using Azure Reservations for SQL VM, SQL
Managed Instance and SQL Database vCores, and using constrained vCPU capable
Virtual Machines will give you a wide variety of options to build a cost-effective solution.
For implementing BCDR solutions in Azure SQL, you benefit from built-in HA replicas of
SQL Managed Instance and SQL Database or free passive HA and DR secondaries using
SQL VM. Also, Business Critical (SQL Managed Instance) and Premium (SQL Database)
tiers provide read-only replicas at no additional cost, while SQL Database Hyperscale tier
allows HA and named secondary replicas for read scale-out at no license cost. In
addition, make sure to check the different Dev/Test pricing plans.
If you're interested in understanding how you can save up to 64% by moving to Azure SQL,
check the ESG report on The Economic Value of Migrating On-Premises SQL Server
Instances to Microsoft Azure SQL Solutions. Finally, check the following article on how
you can Optimize your Azure SQL Managed Instance cost with Microsoft Azure
Well-Architected Framework.
Application and Data Modernization in Azure is achieved through several stages, with
the most common scenario examples described within the Cloud Adoption Framework.
Whenever a Platform-as-a-Service (PaaS) solution doesn't fit your workload, Azure SQL
Virtual Machines provide the possibility to do an as-is migration. By moving to Azure
SQL VM, you'll also receive free extended security patches which can provide significant
savings (for example, up to 69% for SQL Server 2012).
Azure Policy has built-in policies that help organizations meet regulatory compliance. Ad
hoc and customized policies can also be created. For more information, see Azure Policy
Regulatory Compliance controls for Azure SQL Database and SQL Managed Instance.
For an overview of compliance offerings, you can consult Azure compliance
documentation.
The Microsoft Cloud Adoption Framework for Azure is a great starting point to help
you create and implement the business and technology strategy necessary for your
move to Azure.
Can I modernize SQL Server to SQL Managed Instance and just lift
and shift my application to a VM?
Yes. You can Connect your application to Azure SQL Managed Instance through different
scenarios, including when hosting it on a VM.
Moving to Azure SQL brings significant TCO savings by improving operational efficiency
and business agility, as well as eliminating the need for on-premises hardware and
software. According to the ESG report on The Economic Value of Migrating On-Premises
SQL Server Instances to Microsoft Azure SQL Solutions, you can save up to 47% when
migrating from on-premises to Azure SQL Virtual Machines (IaaS), and up to 64% when
migrating to Azure SQL Managed Instance or Azure SQL Database (PaaS).
SQL Managed Instance licensing follows the vCore-based licensing model, where you pay
for compute, storage, and backup storage resources. You can choose between several
service tiers (General Purpose, Business Critical) and hardware generations. The SQL
Managed Instance pricing page provides a full overview of possible SKUs and prices.
If you own Software Assurance for core-based or qualifying subscription licenses for SQL
Server Standard Edition or SQL Server Enterprise Edition, you can use your existing SQL
Server license when moving to SQL Managed Instance, SQL Database or Azure VM by
applying Azure Hybrid Benefit (AHB). You can also simultaneously use these licenses
in both on-premises and Azure environments (dual-use rights) for up to 180 days.
Yes, qualifying subscription licenses can be used to pay for Azure SQL services at a reduced
(base) rate by applying Azure Hybrid Benefit (AHB).
I'm using SQL Server CAL licenses. How can I move to Azure SQL?
SQL Server CAL licenses with appropriate license mobility rights can be used on Azure
SQL VMs, and on Azure SQL Dedicated Host.
Both the General Purpose and Business Critical tiers of SQL Managed Instance and SQL
Database are built on top of an inherently highly available architecture, so there's no
extra charge for HA. For the SQL Database Hyperscale tier, the HA replica is charged.
Can I centrally manage Azure Hybrid Benefit for SQL Server across
the entire Azure subscription?
Yes. You can centrally manage your Azure Hybrid Benefit for SQL Server across the scope
of an entire Azure subscription or overall billing account. This feature is currently in
preview.
For how long can I keep the hybrid solution using Link feature for
Azure SQL Managed Instance running?
You can keep running the hybrid link for as long as needed: weeks, months, or years at a
time; there are no restrictions on this.
Can I apply a hybrid approach and use Link feature for Azure SQL
Managed Instance in order to validate my migration strategy,
before migrating to Azure?
Yes, you can use your replicated data in Azure to test and validate your migration
strategy (performance, workloads and applications) prior to migrating to Azure.
Can I reverse migrate out of Azure SQL and go back to SQL Server
if necessary?
With SQL Server 2022, we offer the best possible solution to seamlessly move data back
with native backup and restore from SQL Managed Instance to SQL Server, completely
de-risking the migration strategy to Azure.
You can use the Azure SQL Migration extension in Azure Data Studio or Data Migration
Assistant. Both tools provide help to detect issues that can affect the Azure SQL
Managed Instance migration and provide guidance on how to resolve them. After
verifying compatibility, you can run the SKU recommendation tool to analyze
performance data and recommend a minimal Azure SQL Managed Instance SKU. Make
sure to visit Azure Migrate which is a centralized hub to assess and migrate on-premises
servers, infrastructure, applications, and data to Azure.
SQL Managed Instance tier choice is guided by availability, performance (for example,
throughput, IOPS, latency), and feature (for example, in-memory OLTP) requirements.
The General Purpose tier is suitable for most generic workloads, as it already provides an
HA architecture and a fully managed database engine with a storage latency between 5
ms and 10 ms. The Business Critical tier is designed for applications that require low-latency
(1-2 ms) responses from the storage layer, fast recovery, strict availability
requirements, and the ability to off-load analytics workloads.
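The tier decision described above can be sketched as a small helper; the thresholds simply mirror the latency figures in the text and are not an official selection rule:

```python
# Hedged sketch of the tier guidance above: General Purpose covers most generic
# workloads (5-10 ms storage latency); Business Critical targets 1-2 ms latency
# and read scale-out for off-loading analytics. Thresholds are illustrative only.

def suggest_tier(max_storage_latency_ms: float, needs_readonly_replica: bool = False) -> str:
    if max_storage_latency_ms < 5 or needs_readonly_replica:
        return "Business Critical"
    return "General Purpose"

print(suggest_tier(8))                                   # latency-tolerant workload
print(suggest_tier(2))                                   # low-latency requirement
print(suggest_tier(8, needs_readonly_replica=True))      # analytics off-load
```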
Infrastructure deployment automation of Azure SQL can be done with PowerShell and
CLI. Useful examples can be found in the Azure PowerShell samples for Azure SQL
Database and Azure SQL Managed Instance article. You can use Azure DevOps
Continuous Integration (CI) and Deployment (CD) Pipelines to fully embed automation
within your Infrastructure-as-Code practices.
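As an illustrative sketch only (the resource names, region, and service objective are placeholder assumptions, not values from this article), the same provisioning can be scripted with the Azure CLI and dropped into a CI/CD pipeline step:

```azurecli
# Create a logical server, then a General Purpose database on it.
az sql server create --name contoso-sql --resource-group contoso-rg \
    --location westeurope --admin-user sqladmin --admin-password '<password>'
az sql db create --name contoso-db --resource-group contoso-rg \
    --server contoso-sql --service-objective GP_Gen5_2
```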
Building your database models and scripts can also be integrated through Database
Projects with Visual Studio Code or Visual Studio. The use of Azure DevOps CI/CD
pipelines will enable deployment of your Database Projects to an Azure SQL
destination of your choice. Finally, service automation via third-party tools is also
possible. For more information, see Azure SQL Managed Instance – Terraform
command.
Database compatibility level can be set in Managed Instance, as described on the Azure
SQL Blog.
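As a minimal T-SQL sketch (the database name is a placeholder), the compatibility level is set per database:

```sql
-- Set the database compatibility level (150 corresponds to SQL Server 2019 behavior).
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 150;

-- Verify the level currently in effect.
SELECT [name], [compatibility_level]
FROM sys.databases
WHERE [name] = 'MyDatabase';
```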
Security
The security strategy follows the layered defense-in-depth approach: Network security +
Access management + Threat protection + Information Protection. You can read more
about SQL Database and SQL Managed Instance security capabilities. Azure-wide,
Microsoft Defender for Cloud provides a solution for Cloud Security Posture
Management (CSPM) and Cloud Workload Protection (CWP).
Business continuity
How can I adapt on-premises business continuity and disaster
recovery (BCDR) concepts into Azure SQL Managed Instance
concepts?
Most Azure SQL BCDR concepts have an equivalent in on-premises SQL Server
implementations. For example, the inherent high availability of SQL Managed Instance
General Purpose tier can be seen as a cloud equivalent for SQL Server FCI. Similarly,
SQL Managed Instance Business Critical tier can be seen as a cloud equivalent for an
Always On Availability Group with synchronous commit to a minimum number of
replicas. As a disaster recovery concept, an auto-failover group on SQL Managed
Instance is comparable to an Always On Availability Group with asynchronous commit.
SQL Database and SQL Managed Instance HA are backed by Service-Level Agreements.
You can find more on SQL Database and SQL Managed Instance business continuity in
the official documentation.
You can check documentation for automated backups in SQL Managed Instance and
SQL Database to learn about RPO, RTO, retention, scheduling and other backup
capabilities and features.
You can use the Azure SQL migration extension for Azure Data Studio for SQL Server
assessment and migration, or choose among other migration tools.
How do I minimize downtime during the online migration?
The Link feature for Azure SQL Managed Instance offers a minimum-downtime online
migration solution, meeting the needs of the most critical tier-1 applications.
See also
Frequently asked questions for SQL Server on Azure VMs
Azure SQL Managed Instance frequently asked questions (FAQ)
Azure SQL Database Hyperscale FAQ
Azure Hybrid Benefit FAQ
Azure security baseline for Azure SQL
Article • 05/31/2023
This security baseline applies guidance from the Microsoft cloud security benchmark
version 1.0 to Azure SQL. The Microsoft cloud security benchmark provides
recommendations on how you can secure your cloud solutions on Azure. The content is
grouped by the security controls defined by the Microsoft cloud security benchmark and
the related guidance applicable to Azure SQL.
You can monitor this security baseline and its recommendations using Microsoft
Defender for Cloud. Azure Policy definitions will be listed in the Regulatory Compliance
section of the Microsoft Defender for Cloud dashboard.
When a feature has relevant Azure Policy Definitions, they are listed in this baseline to
help you measure compliance with the Microsoft cloud security benchmark controls and
recommendations. Some recommendations may require a paid Microsoft Defender plan
to enable certain security scenarios.
Note
Features not applicable to Azure SQL have been excluded. To see how Azure SQL
completely maps to the Microsoft cloud security benchmark, see the full Azure SQL
security baseline mapping file .
Security profile
The security profile summarizes high-impact behaviors of Azure SQL, which may result
in increased security considerations.
Network security
For more information, see the Microsoft cloud security benchmark: Network security.
Features
Configuration Guidance: Deploy the service into a virtual network. Assign private IPs to
the resource (where applicable) unless there is a strong reason to assign public IPs
directly to the resource.
Reference: Use virtual network service endpoints and rules for servers in Azure SQL
Database
Description: Service network traffic respects Network Security Groups rule assignment
on its subnets. Learn more.
Configuration Guidance: Use Azure Virtual Network Service Tags to define network
access controls on network security groups or Azure Firewall configured for your Azure
SQL resources. You can use service tags in place of specific IP addresses when creating
security rules. By specifying the service tag name in the appropriate source or
destination field of a rule, you can allow or deny the traffic for the corresponding
service. Microsoft manages the address prefixes encompassed by the service tag and
automatically updates the service tag as addresses change. When using service
endpoints for Azure SQL Database, outbound to Azure SQL Database Public IP
addresses is required: Network Security Groups (NSGs) must be opened to Azure SQL
Database IPs to allow connectivity. You can do this by using NSG service tags for Azure
SQL Database.
Reference: Use virtual network service endpoints and rules for servers in Azure SQL
Database
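For instance, an outbound NSG rule using the regional Sql service tag can be sketched with the Azure CLI (the resource names and region are placeholder assumptions):

```azurecli
az network nsg rule create --resource-group contoso-rg --nsg-name contoso-nsg \
    --name AllowSqlOutbound --priority 100 --direction Outbound --access Allow \
    --protocol Tcp --destination-port-ranges 1433 \
    --destination-address-prefixes Sql.WestEurope
```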
Features
Description: Service native IP filtering capability for filtering network traffic (not to be
confused with NSG or Azure Firewall). Learn more.
Configuration Guidance: Deploy private endpoints for all Azure resources that support
the Private Link feature, to establish a private access point for the resources.
Reference: Azure Private Link for Azure SQL Database and Azure Synapse Analytics
Description: Service supports disabling public network access either through using
service-level IP ACL filtering rule (not NSG or Azure Firewall) or using a 'Disable Public
Network Access' toggle switch. Learn more.
Name: Public network access on Azure SQL Database should be disabled
Description: Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules.
Effect(s): Audit, Deny, Disabled
Version: 1.1.0
Identity management
For more information, see the Microsoft cloud security benchmark: Identity management.
Features
Description: Service supports using Azure AD authentication for data plane access.
Learn more.
Configuration Guidance: Use Azure Active Directory (Azure AD) as the default
authentication method to control your data plane access.
Feature notes: Avoid the use of local authentication methods or accounts; these
should be disabled wherever possible. Instead, use Azure AD to authenticate where
possible.
Configuration Guidance: Restrict the use of local authentication methods for data plane
access. Instead, use Azure Active Directory (Azure AD) as the default authentication
method to control your data plane access.
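As a T-SQL sketch (the principal name is a hypothetical example), a contained database user can be created from Azure AD instead of a local SQL login, and granted only the roles the workload needs:

```sql
-- Create a contained database user mapped to an Azure AD identity.
CREATE USER [alice@contoso.com] FROM EXTERNAL PROVIDER;

-- Grant least-privilege access instead of using a local administrative account.
ALTER ROLE db_datareader ADD MEMBER [alice@contoso.com];
```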
Features
Managed Identities
Description: Data plane actions support authentication using managed identities. Learn
more.
Service Principals
Description: Data plane supports authentication using service principals. Learn more.
Feature notes: Azure SQL DB provides multiple ways to authenticate at the data plane;
one of them is Azure AD, which includes managed identities and service principals.
Features
Description: Data plane access can be controlled using Azure AD Conditional Access
Policies. Learn more.
Configuration Guidance: Define the applicable conditions and criteria for Azure Active
Directory (Azure AD) conditional access in the workload. Consider common use cases
such as blocking or granting access from specific locations, blocking risky sign-in
behavior, or requiring organization-managed devices for specific applications.
Features
Description: Data plane supports native use of Azure Key Vault for credential and secrets
store. Learn more.
Feature notes: Only cryptographic keys can be stored in AKV, not secrets or user
credentials. For example, Transparent Data Encryption protector keys.
Privileged access
For more information, see the Microsoft cloud security benchmark: Privileged access.
Features
Description: Service has the concept of a local administrative account. Learn more.
Features
Description: Azure Role-Based Access Control (Azure RBAC) can be used to manage
access to the service's data plane actions. Learn more.
Features
Customer Lockbox
Description: Customer Lockbox can be used for Microsoft support access. Learn more.
Features
Description: Tools (such as Azure Purview or Azure Information Protection) can be used
for data discovery and classification in the service. Learn more.
Features
Description: Service supports DLP solution to monitor sensitive data movement (in
customer's content). Learn more.
Feature notes: There are tools that can be used with SQL Server for DLP, but there is no
built-in support.
Name: Azure Defender for SQL should be enabled for unprotected SQL Managed Instances
Description: Audit each SQL Managed Instance without advanced data security.
Effect(s): AuditIfNotExists, Disabled
Version: 1.0.2
Features
Description: Service supports data in-transit encryption for data plane. Learn more.
Features
Description: Data at-rest encryption using platform keys is supported; any customer
content at rest is encrypted with these Microsoft-managed keys. Learn more.
Reference: Transparent data encryption for SQL Database, SQL Managed Instance, and
Azure Synapse Analytics
Features
Configuration Guidance: If required for regulatory compliance, define the use case and
service scope where encryption using customer-managed keys is needed. Enable and
implement data at rest encryption using customer-managed keys for those services.
Reference: Transparent data encryption for SQL Database, SQL Managed Instance, and
Azure Synapse Analytics
Name: SQL managed instances should use customer-managed keys to encrypt data at rest
Description: Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement.
Effect(s): Audit, Deny, Disabled
Version: 2.0.0

Name: SQL servers should use customer-managed keys to encrypt data at rest
Description: Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement.
Effect(s): Audit, Deny, Disabled
Version: 2.0.1
Features
Description: The service supports Azure Key Vault integration for any customer keys,
secrets, or certificates. Learn more.
Feature notes: Certain features can use AKV for keys, for example, when using Always
Encrypted.
Configuration Guidance: Use Azure Key Vault to create and control the life cycle of your
encryption keys (TDE and Always Encrypted), including key generation, distribution, and
storage. Rotate and revoke your keys in Azure Key Vault and your service based on a
defined schedule or when there is a key retirement or compromise. When there is a
need to use customer-managed key (CMK) in the workload, service, or application level,
ensure you follow the best practices for key management. If you need to bring your own
key (BYOK) to the service (such as importing HSM-protected keys from your on-
premises HSMs into Azure Key Vault), follow recommended guidelines to perform initial
key generation and key transfer.
Reference: Configure Always Encrypted by using Azure Key Vault
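A rotation of the TDE protector can be sketched with the Azure CLI (the vault, server, and key names are placeholder assumptions; the key identifier must be the full versioned URI):

```azurecli
# Create a new version of the customer-managed TDE protector key.
az keyvault key create --vault-name contoso-kv --name tde-protector --kty RSA --size 2048

# Register the new key version with the logical server, then make it the TDE protector.
az sql server key create --resource-group contoso-rg --server contoso-sql \
    --kid https://contoso-kv.vault.azure.net/keys/tde-protector/<key-version>
az sql server tde-key set --resource-group contoso-rg --server contoso-sql \
    --server-key-type AzureKeyVault \
    --kid https://contoso-kv.vault.azure.net/keys/tde-protector/<key-version>
```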
Asset management
For more information, see the Microsoft cloud security benchmark: Asset management.
Features
Description: Service configurations can be monitored and enforced via Azure Policy.
Learn more.
Configuration Guidance: Use Microsoft Defender for Cloud to configure Azure Policy to
audit and enforce configurations of your Azure resources. Use Azure Monitor to create
alerts when there is a configuration deviation detected on the resources. Use Azure
Policy [deny] and [deploy if not exists] effects to enforce secure configuration across
Azure resources.
Reference: Azure Policy built-in definitions for Azure SQL Database & SQL Managed
Instance
Features
Configuration Guidance: Microsoft Defender for Azure SQL helps you discover and
mitigate potential database vulnerabilities and alerts you to anomalous activities that
may be an indication of a threat to your databases.
Name: Azure Defender for SQL should be enabled for unprotected Azure SQL servers
Description: Audit SQL servers without Advanced Data Security.
Effect(s): AuditIfNotExists, Disabled
Version: 2.0.1

Name: Azure Defender for SQL should be enabled for unprotected SQL Managed Instances
Description: Audit each SQL Managed Instance without advanced data security.
Effect(s): AuditIfNotExists, Disabled
Version: 1.0.2
Enable logging at the server level as this will filter down to databases, too.
Features
Description: Service produces resource logs that can provide enhanced service-specific
metrics and logging. The customer can configure these resource logs and send them to
their own data sink like a storage account or log analytics workspace. Learn more.
Configuration Guidance: Enable resource logs for the service. For example, Key Vault
supports additional resource logs for actions that get a secret from a key vault, and
Azure SQL has resource logs that track requests to a database. The content of resource
logs varies by the Azure service and resource type.
Features
Azure Backup
Description: The service can be backed up by the Azure Backup service. Learn more.
Description: Service supports its own native backup capability (if not using Azure
Backup). Learn more.
Next steps
See the Microsoft cloud security benchmark overview
Learn more about Azure security baselines
SQL vulnerability assessment helps you
identify database vulnerabilities
Article • 06/15/2023
SQL vulnerability assessment is an easy-to-configure service that can discover, track, and
help you remediate potential database vulnerabilities. Use it to proactively improve your
database security for:
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
Vulnerability assessment is part of Microsoft Defender for Azure SQL, which is a unified
package for advanced SQL security capabilities. Vulnerability assessment can be
accessed and managed from each SQL database resource in the Azure portal.
Note
Vulnerability assessment is supported for Azure SQL Database, Azure SQL Managed
Instance, and Azure Synapse Analytics. Databases in Azure SQL Database, Azure
SQL Managed Instance, and Azure Synapse Analytics are collectively referred to in
the remainder of this article as databases, and the server refers to the server
that hosts databases for Azure SQL Database and Azure Synapse.
Vulnerability assessment is a scanning service built into Azure SQL Database. The service
employs a knowledge base of rules that flag security vulnerabilities. It highlights
deviations from best practices, such as misconfigurations, excessive permissions, and
unprotected sensitive data.
The rules are based on Microsoft's best practices and focus on the security issues that
present the biggest risks to your database and its valuable data. They cover database-
level issues and server-level security issues, like server firewall settings and server-level
permissions.
Results of the scan include actionable steps to resolve each issue and provide
customized remediation scripts where applicable. You can customize an assessment
report for your environment by setting an acceptable baseline for:
Permission configurations
Feature configurations
Database settings
Express configuration – The default procedure that lets you configure vulnerability
assessment without dependency on external storage to store baseline and scan
result data.
Parameter: Supported SQL flavors
Express configuration: Azure SQL Database
Classic configuration: Azure SQL Database • Azure Synapse Dedicated SQL Pools • Azure SQL Managed Instance

Parameter: Supported scopes
Express configuration: Subscription • Database
Classic configuration: Subscription • Database • Single rule

Parameter: Scan scheduling
Express configuration: Internal and not configurable
Classic configuration: Internal and not configurable

Parameter: Supported rules
Express configuration: All vulnerability assessment rules for the supported resource type
Classic configuration: All vulnerability assessment rules for the supported resource type

Parameter: Apply baseline
Express configuration: Will take effect without rescanning the database
Classic configuration: Will take effect only after rescanning the database

Parameter: Scan export
Express configuration: Azure Resource Graph
Classic configuration: Excel format, Azure Resource Graph
Next steps
Enable SQL vulnerability assessments
Express configuration common questions and Troubleshooting.
Learn more about Microsoft Defender for Azure SQL.
Learn more about data discovery and classification.
Learn more about storing vulnerability assessment scan results in a storage
account accessible behind firewalls and VNets.
Monitor your SQL deployments with SQL Insights (preview)
Article • 09/21/2022
Applies to:
SQL Server on Azure VM
Azure SQL Database
Azure SQL Managed Instance
SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL family. SQL Insights uses dynamic
management views to expose the data that you need to monitor health, diagnose problems, and tune performance.
SQL Insights performs all monitoring remotely. Monitoring agents on dedicated virtual machines connect to your SQL resources and
remotely gather data. The gathered data is stored in Azure Monitor Logs to enable easy aggregation, filtering, and trend analysis. You
can view the collected data from the SQL Insights workbook template, or you can delve directly into the data by using log queries.
The following diagram details how information flows from the database engine and Azure resource logs, and how it can be
surfaced. For a more detailed diagram of Azure SQL logging, see Monitoring and diagnostic telemetry.
[Diagram: database engine telemetry from SQL Database, SQL Managed Instance, or SQL Server on Azure VMs is gathered by the collection agent on a monitoring VM, stored in the InsightsMetrics table in a Log Analytics workspace, and surfaced through SQL Insights workbooks and log alerts.]
Pricing
There is no direct cost for SQL Insights (preview). All costs are incurred by the virtual machines that gather the data, the Log Analytics
workspaces that store the data, and any alert rules configured on the data.
Virtual machines
For virtual machines, you're charged based on the pricing published on the virtual machines pricing page . The number of virtual
machines that you need will vary based on the number of connection strings you want to monitor. We recommend allocating one
virtual machine of size Standard_B2s for every 100 connection strings. For more information, see Azure virtual machine requirements.
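The sizing rule above is simple enough to check with a few lines of Python (purely illustrative arithmetic; the Standard_B2s size and the one-VM-per-100-connection-strings ratio come from the recommendation above):

```python
import math

def monitoring_vms_needed(connection_strings: int, per_vm: int = 100) -> int:
    """Recommended number of Standard_B2s monitoring VMs for SQL Insights."""
    # Round up, and always provision at least one monitoring VM.
    return max(1, math.ceil(connection_strings / per_vm))

# 250 monitored connection strings -> 3 monitoring VMs
print(monitoring_vms_needed(250))
```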
Exact charges will vary based on the amount of data ingested, retained, and exported. The amount of this data will vary based on
your database activity and the collection settings defined in your monitoring profiles.
Alert rules
For alert rules in Azure Monitor, you're charged based on the pricing published on the Azure Monitor pricing page . If you choose
to create alerts with SQL Insights (preview), you're charged for any alert rules created and any notifications sent.
Supported versions
SQL Insights (preview) has the following limitations on supported versions and
environments:
Non-Azure instances: SQL Server running on virtual machines outside Azure is not supported.
Azure SQL Database elastic pools: Metrics can't be gathered for elastic pools or for databases within elastic pools.
Azure SQL Database low service tiers: Metrics can't be gathered for databases on Basic, S0, S1, and S2 service tiers.
Azure SQL Database serverless tier: Metrics can be gathered for databases through the serverless compute tier. However, the
process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.
Secondary replicas: Metrics can be gathered for only a single secondary replica per database. If a database has more than one
secondary replica, only one can be monitored.
Authentication with Azure Active Directory: The only supported method of authentication for monitoring is SQL authentication.
For SQL Server on Azure Virtual Machines, authentication through Active Directory on a custom domain controller is not
supported.
Regional availability
SQL Insights (preview) is available in all Azure regions where Azure Monitor is available, with the exception of Azure Government
and national clouds.
For more instructions, see Enable SQL Insights (preview) and Troubleshoot SQL Insights (preview).
Collected data
SQL Insights performs all monitoring remotely. No agents are installed on the virtual machines running SQL Server.
SQL Insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources. Each monitoring virtual
machine has the Azure Monitor agent and the Workload Insights (WLI) extension installed.
The WLI extension includes the open-source Telegraf agent. SQL Insights uses data collection rules to specify the data collection
settings for Telegraf's SQL Server plug-in.
Different sets of data are available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server. The following tables
describe the available data. You can customize which datasets to collect and the frequency of collection when you create a
monitoring profile.
Friendly name: Name of the query as shown in the Azure portal when you're creating a monitoring profile.
Configuration name: Name of the query as shown in the Azure portal when you're editing a monitoring profile.
Namespace: Name of the query as found in a Log Analytics workspace. This identifier appears in the InsightsMetrics table on
the Namespace property in the Tags column.
DMVs: Dynamic management views that are used to produce the dataset.
Enabled by default: Whether the data is collected by default.
Default collection frequency: How often the data is collected by default.
sys.database_service_objectives
sys.dm_exec_sql_text
sys.dm_hadr_availability_group_states
Next steps
For frequently asked questions about SQL Insights (preview), see Frequently asked questions.
Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance
Tutorial: Getting started with Always
Encrypted
Article • 02/28/2023
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
This tutorial teaches you how to get started with Always Encrypted. It will show you:
Note
If you're looking for information on Always Encrypted with secure enclaves, see
the following tutorials instead:
Prerequisites
For this tutorial, you need:
An empty database in Azure SQL Database, Azure SQL Managed Instance, or SQL
Server. The below instructions assume the database name is ContosoHR. You need
to be an owner of the database (a member of the db_owner role). For information
on how to create a database, see Quickstart: Create a single database - Azure SQL
Database or Create a database in SQL Server.
Optional, but recommended, especially if your database is in Azure: a key vault in
Azure Key Vault. For information on how to create a key vault, see Quickstart:
Create a key vault using the Azure portal.
If your key vault uses the access policy permissions model, make sure you have
the following key permissions in the key vault: get , list , create , unwrap key ,
wrap key , verify , sign . See Assign a Key Vault access policy.
If you're using the Azure role-based access control (RBAC) permission model,
make sure you're a member of the Key Vault Crypto Officer role for your
key vault. See Provide access to Key Vault keys, certificates, and secrets with an
Azure role-based access control.
The latest version of SQL Server Management Studio (SSMS) or the latest version
of the SqlServer and Az PowerShell modules. The Az PowerShell module is required
only if you're using Azure Key Vault.
SSMS
3. Paste in and execute the below statements to create a new table, named
Employees.
SQL
CREATE TABLE [dbo].[Employees]
(
    [EmployeeID] [int] IDENTITY(1,1) NOT NULL
    , [SSN] [char](11) NOT NULL
    , [FirstName] [nvarchar](50) NOT NULL
    , [LastName] [nvarchar](50) NOT NULL
    , [Salary] [money] NOT NULL
) ON [PRIMARY];
GO
4. Paste in and execute the below statements to add a few employee records to
the Employees table.
SQL
INSERT INTO [dbo].[Employees]
(
    [SSN]
    , [FirstName]
    , [LastName]
    , [Salary]
)
VALUES
(
    '795-73-9838'
    , N'Catherine'
    , N'Abel'
    , $31692
);

INSERT INTO [dbo].[Employees]
(
    [SSN]
    , [FirstName]
    , [LastName]
    , [Salary]
)
VALUES
(
    '990-00-6818'
    , N'Kim'
    , N'Abercrombie'
    , $55415
);
SSMS
SSMS provides a wizard that helps you easily configure Always Encrypted by setting
up a column master key and a column encryption key, and encrypting selected columns.
2. Right-click the Employees table and select Encrypt Columns to open the
Always Encrypted wizard.
3. Select Next on the Introduction page of the wizard.
b. Leave the default selection of Current User - this will instruct the
wizard to generate a certificate (your new column master key) in the
Current User store.
c. Select Next.
7. On the Run Settings page, you're asked if you want to proceed with
encryption or generate a PowerShell script to be executed later. Leave the
default settings and select Next.
8. On the Summary page, the wizard informs you about the actions it will
execute. Check all the information is correct and select Finish.
9. On the Results page, you can monitor the progress of the wizard's operations.
Wait until all operations complete successfully and select Close.
10. (Optional) Explore the changes the wizard has made in your database.
a. Expand ContosoHR > Security > Always Encrypted Keys to explore the
metadata objects for the column master key and the column encryption
key that the wizard created.
b. You can also run the below queries against the system catalog views that
contain key metadata.
SQL
c. In Object Explorer, right-click the Employees table and select Script Table
as > CREATE To > New Query Editor Window. This will open a new query
window with the CREATE TABLE statement for the Employees table. Note
the ENCRYPTED WITH clause that appears in the definitions of the SSN
and Salary columns.
d. You can also run the below query against sys.columns to retrieve column-
level encryption metadata for the two encrypted columns.
SQL
SELECT
    [name]
    , [encryption_type]
    , [encryption_type_desc]
    , [encryption_algorithm_name]
    , [column_encryption_key_id]
FROM sys.columns
WHERE [encryption_type] IS NOT NULL;
SQL
3. Connect to your database with Always Encrypted enabled for your connection.
a. Right-click anywhere in the query window and select Connection > Change
Connection. This will open the Connect to Database Engine dialog.
b. Select Options >>. This will show additional tabs in the Connect to
Database Engine dialog.
c. Select the Always Encrypted tab.
d. Select Enable Always Encrypted (column encryption).
e. Select Connect.
4. Rerun the same query. Since you're connected with Always Encrypted enabled
for your database connection, the client driver in SSMS will attempt to decrypt
data stored in both encrypted columns. If you use Azure Key Vault, you may
be prompted to sign into Azure.
5. Enable Parameterization for Always Encrypted. This feature allows you to run
queries that filter data by encrypted columns (or insert data to encrypted
columns).
a. Select Query from the main menu of SSMS.
b. Select Query Options....
c. Navigate to Execution > Advanced.
d. Make sure Enable Parameterization for Always Encrypted is checked.
e. Select OK.
6. Paste in and execute the below query, which filters data by the encrypted SSN
column. The query should return one row containing plaintext values.
SQL
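The query in question has this shape (a sketch assuming the Employees table and sample SSN from earlier in this tutorial; with parameterization enabled, SSMS converts the initialized variable into a parameter so the client driver can encrypt it before sending it to the server):

```sql
DECLARE @SSN char(11) = '795-73-9838';
SELECT [SSN], [FirstName], [LastName], [Salary]
FROM [dbo].[Employees]
WHERE [SSN] = @SSN;
```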
7. Optionally, if you're using Azure Key Vault configured with the access policy
permissions model, follow the below steps to see what happens when a user
tries to retrieve plaintext data from encrypted columns without having access
to the column master key protecting the data.
a. Remove the key unwrap permission for yourself in the access policy for your
key vault. For more information, see Assign a Key Vault access policy.
b. Since the client driver in SSMS caches the column encryption keys acquired
from a key vault for 2 hours, close SSMS and open it again. This will ensure
the key cache is empty.
c. Connect to your database with Always Encrypted enabled for your
connection.
d. Paste in and execute the following query. The query should fail with the
error message indicating you're missing the required unwrap permission.
SQL
Next steps
Develop applications using Always Encrypted
See also
Always Encrypted documentation
Always Encrypted with secure enclaves documentation
Provision Always Encrypted keys using SQL Server Management Studio
Configure Always Encrypted using PowerShell
Always Encrypted wizard
Query columns using Always Encrypted with SQL Server Management Studio
Copy and transform data in Azure SQL Database by using Azure
Data Factory or Azure Synapse Analytics
Article • 04/06/2023
APPLIES TO:
Azure Data Factory
Azure Synapse Analytics
This article outlines how to use Copy Activity in Azure Data Factory or Azure Synapse pipelines to copy data from and to Azure SQL
Database, and use Data Flow to transform data in Azure SQL Database. To learn more, read the introductory article for Azure Data Factory
or Azure Synapse Analytics.
Supported capabilities
This Azure SQL Database connector is supported for the following capabilities:
Lookup activity ①② ✓
GetMetadata activity ①② ✓
Script activity ①② ✓
① Azure integration runtime ② Self-hosted integration runtime
Copying data by using SQL authentication and Azure Active Directory (Azure AD) Application token authentication with a service
principal or managed identities for Azure resources.
As a source, retrieving data by using a SQL query or a stored procedure. You can also choose to parallel copy from an Azure SQL
Database source, see the Parallel copy from SQL database section for details.
As a sink, automatically creating destination table if not exists based on the source schema; appending data to a table or invoking a
stored procedure with custom logic during the copy.
If you use the Azure SQL Database serverless tier, note that when the server is paused, the activity run fails instead of waiting for the
auto-resume to be ready. You can add an activity retry or chain additional activities to make sure the server is live upon the actual execution.
Important
If you copy data by using the Azure integration runtime, configure a server-level firewall rule so that Azure services can access the
server.
If you copy data by using a self-hosted integration runtime, configure the firewall to allow the appropriate IP range. This range
includes the machine's IP that's used to connect to Azure SQL Database.
Get started
To perform the Copy activity with a pipeline, you can use one of the following tools or SDKs:
2. Search for SQL and select the Azure SQL Database connector.
3. Configure the service details, test the connection, and create the new linked service.
Connector configuration details
The following sections provide details about properties that are used to define Azure Data Factory or Synapse pipeline entities specific to
an Azure SQL Database connector.
Property: connectionString
Description: Specify information needed to connect to the Azure SQL Database instance for the connectionString property. You also can put a password or service principal key in Azure Key Vault. If it's SQL authentication, pull the password configuration out of the connection string. For more information, see the JSON example following the table and Store credentials in Azure Key Vault.
Required: Yes

Property: azureCloudType
Description: For service principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. Allowed values are AzurePublic, AzureChina, AzureUsGovernment, and AzureGermany. By default, the data factory or Synapse pipeline's cloud environment is used.
Required: No

Property: alwaysEncryptedSettings
Description: Specify alwaysencryptedsettings information that's needed to enable Always Encrypted to protect sensitive data stored in SQL Server by using either managed identity or service principal. For more information, see the JSON example following the table and the Using Always Encrypted section. If not specified, the default Always Encrypted setting is disabled.
Required: No

Property: connectVia
Description: This integration runtime is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is located in a private network. If not specified, the default Azure integration runtime is used.
Required: No
For different authentication types, refer to the following sections on specific properties, prerequisites and JSON samples, respectively:
SQL authentication
Service principal authentication
System-assigned managed identity authentication
User-assigned managed identity authentication
Tip
If you hit an error with the error code "UserErrorFailedToConnectToSqlServer" and a message like "The session limit for the database is XXX and has been reached," add Pooling=false to your connection string and try again. Pooling=false is also recommended for linked services that use a self-hosted integration runtime (SHIR). Pooling and other connection parameters can be added as new parameter names and values in the Additional connection properties section of the linked service creation form.
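As a rough illustration of the tip, here is a small helper that appends Pooling=false to a connection string only when a Pooling setting isn't already present. The parsing is simplified (plain semicolon splitting) and the helper name is illustrative:

```python
def add_pooling_false(connection_string: str) -> str:
    """Append Pooling=false to a semicolon-delimited connection string
    unless a Pooling setting is already present (case-insensitive)."""
    parts = [p for p in connection_string.split(";") if p.strip()]
    if any(p.strip().lower().startswith("pooling=") for p in parts):
        return connection_string  # already configured, leave unchanged
    parts.append("Pooling=false")
    return ";".join(parts) + ";"

cs = "Server=tcp:myserver.database.windows.net,1433;Database=mydb;"
print(add_pooling_false(cs))
```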
SQL authentication
To use the SQL authentication type, specify the generic properties that are described in the preceding section.
JSON
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "<connection string>"
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
Password in Azure Key Vault:

JSON
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "<connection string without password>",
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<secretName>"
            }
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
Using Always Encrypted:

JSON
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "<connection string>",
            "alwaysEncryptedSettings": {
                "alwaysEncryptedAkvAuthType": "ServicePrincipal",
                "servicePrincipalKey": {
                    "type": "SecureString",
                    "value": "<service principal key>"
                }
            }
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
To use service principal authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:

| Property | Description | Required |
| --- | --- | --- |
| servicePrincipalKey | Specify the application's key. Mark this field as SecureString to store it securely, or reference a secret stored in Azure Key Vault. | Yes |
| tenant | Specify the tenant information, like the domain name or tenant ID, under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
1. Create an Azure Active Directory application from the Azure portal. Make note of the application name and the following values that
define the linked service:
Application ID
Application key
Tenant ID
2. Provision an Azure Active Directory administrator for your server on the Azure portal if you haven't already done so. The Azure AD
administrator must be an Azure AD user or Azure AD group, but it can't be a service principal. This step is done so that, in the next
step, you can use an Azure AD identity to create a contained database user for the service principal.
3. Create contained database users for the service principal. Connect to the database from or to which you want to copy data by using
tools like SQL Server Management Studio, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following
T-SQL:
SQL
CREATE USER [your application name] FROM EXTERNAL PROVIDER;
4. Grant the service principal needed permissions as you normally do for SQL users or others. Run the following code. For more options,
see this document.
SQL
ALTER ROLE [role name] ADD MEMBER [your application name];
5. Configure an Azure SQL Database linked service in an Azure Data Factory or Synapse workspace.
Linked service example that uses service principal authentication
JSON
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "<connection string>",
            "servicePrincipalId": "<service principal id>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<service principal key>"
            },
            "tenant": "<tenant info>"
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
To use system-assigned managed identity authentication, specify the generic properties that are described in the preceding section, and
follow these steps.
1. Provision an Azure Active Directory administrator for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or an Azure AD group. If you grant the group with the managed identity an admin role, skip steps 2 and 3; the administrator has full access to the database.
2. Create contained database users for the managed identity. Connect to the database from or to which you want to copy data by using
tools like SQL Server Management Studio, with an Azure AD identity that has at least ALTER ANY USER permission. Run the following
T-SQL:
SQL
CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER;
3. Grant the managed identity needed permissions as you normally do for SQL users and others. Run the following code. For more
options, see this document.
SQL
ALTER ROLE [role name] ADD MEMBER [your_resource_name];
Example
JSON
JSON
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "<connection string>"
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
To use user-assigned managed identity authentication, in addition to the generic properties that are described in the preceding section, specify the following properties:

| Property | Description | Required |
| --- | --- | --- |
| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
1. Provision an Azure Active Directory administrator for your server on the Azure portal if you haven't already done so. The Azure AD administrator can be an Azure AD user or an Azure AD group. If you grant the group with the user-assigned managed identity an admin role, skip step 3; the administrator has full access to the database.
2. Create contained database users for the user-assigned managed identity. Connect to the database from or to which you want to copy
data by using tools like SQL Server Management Studio, with an Azure AD identity that has at least ALTER ANY USER permission. Run
the following T-SQL:
SQL
CREATE USER [your_resource_name] FROM EXTERNAL PROVIDER;
3. Create one or multiple user-assigned managed identities and grant the user-assigned managed identity needed permissions as you
normally do for SQL users and others. Run the following code. For more options, see this document.
SQL
ALTER ROLE [role name] ADD MEMBER [your_resource_name];
4. Assign one or multiple user-assigned managed identities to your data factory and create credentials for each user-assigned managed
identity.
Example:
JSON
{
    "name": "AzureSqlDbLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "<connection string>",
            "credential": {
                "referenceName": "credential1",
                "type": "CredentialReference"
            }
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
Dataset properties
For a full list of sections and properties available to define datasets, see Datasets.
The following properties are supported for the Azure SQL Database dataset:

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the dataset must be set to AzureSqlTable. | Yes |
| schema | Name of the schema. | No for source, Yes for sink |
| table | Name of the table/view. | No for source, Yes for sink |
| tableName | Name of the table/view with schema. This property is supported for backward compatibility. For new workloads, use schema and table. | No for source, Yes for sink |
JSON
{
    "name": "AzureSQLDbDataset",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": {
            "referenceName": "<Azure SQL Database linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "schema": "<schema_name>",
            "table": "<table_name>"
        }
    }
}
Tip
To load data from Azure SQL Database efficiently by using data partitioning, learn more from Parallel copy from SQL database.
To copy data from Azure SQL Database, the following properties are supported in the copy activity source section:

| Property | Description | Required |
| --- | --- | --- |
| type | The type property of the copy activity source must be set to AzureSqlSource. "SqlSource" type is still supported for backward compatibility. | Yes |
| sqlReaderQuery | This property uses the custom SQL query to read data. An example is select * from MyTable. | No |
| sqlReaderStoredProcedureName | The name of the stored procedure that reads data from the source table. The last SQL statement must be a SELECT statement in the stored procedure. | No |
| isolationLevel | Specifies the transaction locking behavior for the SQL source. The allowed values are: ReadCommitted, ReadUncommitted, RepeatableRead, Serializable, Snapshot. If not specified, the database's default isolation level is used. Refer to this doc for more details. | No |
| partitionOptions | Specifies the data partitioning options used to load data from Azure SQL Database. Allowed values are: None (default), PhysicalPartitionsOfTable, and DynamicRange. When a partition option is enabled (that is, not None), the degree of parallelism to concurrently load data from an Azure SQL Database is controlled by the parallelCopies setting on the copy activity. | No |

Under partitionSettings:

| Property | Description | Required |
| --- | --- | --- |
| partitionColumnName | Specify the name of the source column in integer or date/datetime type (int, smallint, bigint, date, smalldatetime, datetime, datetime2, or datetimeoffset) that will be used by range partitioning for parallel copy. If not specified, the index or the primary key of the table is autodetected and used as the partition column. Apply when the partition option is DynamicRange. If you use a query to retrieve the source data, hook ?AdfDynamicRangePartitionCondition in the WHERE clause. For an example, see the Parallel copy from SQL database section. | No |
| partitionUpperBound | The maximum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result will be partitioned and copied. If not specified, the copy activity auto-detects the value. Apply when the partition option is DynamicRange. For an example, see the Parallel copy from SQL database section. | No |
| partitionLowerBound | The minimum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result will be partitioned and copied. If not specified, the copy activity auto-detects the value. Apply when the partition option is DynamicRange. For an example, see the Parallel copy from SQL database section. | No |
If sqlReaderQuery is specified for AzureSqlSource, the copy activity runs this query against the Azure SQL Database source to get the data. You can also specify a stored procedure by specifying sqlReaderStoredProcedureName and storedProcedureParameters if the stored procedure takes parameters.
When using a stored procedure in the source to retrieve data, note that if your stored procedure is designed to return a different schema when a different parameter value is passed in, you may encounter a failure or see an unexpected result when importing the schema from the UI or when copying data to the SQL database with auto table creation.
Example: SQL query

JSON
{
    "activities": [{
        "name": "CopyFromAzureSQLDatabase",
        "type": "Copy",
        "inputs": [{ "referenceName": "<Azure SQL Database input dataset name>", "type": "DatasetReference" }],
        "outputs": [{ "referenceName": "<output dataset name>", "type": "DatasetReference" }],
        "typeProperties": {
            "source": {
                "type": "AzureSqlSource",
                "sqlReaderQuery": "SELECT * FROM MyTable"
            },
            "sink": { "type": "<sink type>" }
        }
    }]
}
Example: stored procedure

JSON
{
    "activities": [{
        "name": "CopyFromAzureSQLDatabase",
        "type": "Copy",
        "inputs": [{ "referenceName": "<Azure SQL Database input dataset name>", "type": "DatasetReference" }],
        "outputs": [{ "referenceName": "<output dataset name>", "type": "DatasetReference" }],
        "typeProperties": {
            "source": {
                "type": "AzureSqlSource",
                "sqlReaderStoredProcedureName": "CopyTestSrcStoredProcedureWithParameters",
                "storedProcedureParameters": {
                    "stringData": { "value": "<string value>" },
                    "identifier": { "value": "<int value>", "type": "Int" }
                }
            },
            "sink": { "type": "<sink type>" }
        }
    }]
}

The stored procedure definition:

SQL
CREATE PROCEDURE CopyTestSrcStoredProcedureWithParameters
(
    @stringData varchar(20),
    @identifier int
)
AS
BEGIN
    SELECT *
    FROM dbo.UnitTestSrcTable
END
GO
Tip
Learn more about the supported write behaviors, configurations, and best practices from Best practice for loading data into Azure
SQL Database.
To copy data to Azure SQL Database, the following properties are supported in the copy activity sink section:

| Property | Description |
| --- | --- |
| type | The type property of the copy activity sink must be set to AzureSqlSink. "SqlSink" type is still supported for backward compatibility. |
| preCopyScript | Specify a SQL query for the copy activity to run before writing data into Azure SQL Database. It's invoked only once per copy run. Use this property to clean up the preloaded data. |
| tableOption | Specifies whether to automatically create the sink table, if it doesn't exist, based on the source schema. Auto table creation is not supported when the sink specifies a stored procedure. |
| sqlWriterStoredProcedureName | The name of the stored procedure that defines how to apply source data into a target table. This stored procedure is invoked per batch. For operations that run only once and have nothing to do with source data, use the preCopyScript property. |
| storedProcedureTableTypeParameterName | The parameter name of the table type specified in the stored procedure. |
| sqlWriterTableType | The table type name to be used in the stored procedure. The copy activity makes the data being moved available in a temp table with this table type. Stored procedure code can then merge the data that's being copied with existing data. |
| storedProcedureParameters | Parameters for the stored procedure. Allowed values are name and value pairs. Names and casing of parameters must match the names and casing of the stored procedure parameters. |
| writeBatchSize | Number of rows to insert into the SQL table per batch. The allowed value is integer (number of rows). By default, the service dynamically determines the appropriate batch size based on the row size. |
| writeBatchTimeout | The wait time for the batch insert operation to finish before it times out. |
| disableMetricsCollection | The service collects metrics such as Azure SQL Database DTUs for copy performance optimization and recommendations, which introduce additional database access. If you are concerned with this behavior, specify true to turn it off. |
| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections. |
| writeBehavior | Specify the write behavior for the copy activity to load data into Azure SQL Database. Allowed values are insert and upsert. By default, the service uses insert to load data. |

Under upsertSettings:

| Property | Description |
| --- | --- |
| useTempDB | Specify whether to use a global temporary table or a physical table as the interim table for upsert. By default, the service uses a global temporary table as the interim table. The default value is true. |
| interimSchemaName | Specify the interim schema for creating the interim table if a physical table is used. Note: the user needs to have permission to create and delete tables. By default, the interim table shares the same schema as the sink table. |
| keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. If not specified, the primary key is used. |
Example 1: Append data

JSON
{
    "activities": [{
        "name": "CopyToAzureSQLDatabase",
        "type": "Copy",
        "inputs": [{ "referenceName": "<input dataset name>", "type": "DatasetReference" }],
        "outputs": [{ "referenceName": "<Azure SQL Database output dataset name>", "type": "DatasetReference" }],
        "typeProperties": {
            "source": { "type": "<source type>" },
            "sink": {
                "type": "AzureSqlSink",
                "tableOption": "autoCreate",
                "writeBatchSize": 100000
            }
        }
    }]
}
Example 2: Invoke a stored procedure during copy. For more details, see Invoke a stored procedure from a SQL sink.
JSON
{
    "activities": [{
        "name": "CopyToAzureSQLDatabase",
        "type": "Copy",
        "inputs": [{ "referenceName": "<input dataset name>", "type": "DatasetReference" }],
        "outputs": [{ "referenceName": "<Azure SQL Database output dataset name>", "type": "DatasetReference" }],
        "typeProperties": {
            "source": { "type": "<source type>" },
            "sink": {
                "type": "AzureSqlSink",
                "sqlWriterStoredProcedureName": "CopyTestStoredProcedureWithParameters",
                "storedProcedureTableTypeParameterName": "MyTable",
                "sqlWriterTableType": "MyTableType",
                "storedProcedureParameters": {
                    "<parameter name>": { "value": "<parameter value>" }
                }
            }
        }
    }]
}
Example 3: Upsert data

JSON
{
    "activities": [{
        "name": "CopyToAzureSQLDatabase",
        "type": "Copy",
        "inputs": [{ "referenceName": "<input dataset name>", "type": "DatasetReference" }],
        "outputs": [{ "referenceName": "<Azure SQL Database output dataset name>", "type": "DatasetReference" }],
        "typeProperties": {
            "source": { "type": "<source type>" },
            "sink": {
                "type": "AzureSqlSink",
                "tableOption": "autoCreate",
                "writeBehavior": "upsert",
                "upsertSettings": {
                    "useTempDB": true,
                    "keys": [
                        "<column name>"
                    ]
                }
            }
        }
    }]
}
When you enable partitioned copy, the copy activity runs parallel queries against your Azure SQL Database source to load data by partitions. The parallel degree is controlled by the parallelCopies setting on the copy activity. For example, if you set parallelCopies to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Azure SQL Database.
We suggest that you enable parallel copy with data partitioning, especially when you load a large amount of data from your Azure SQL Database. The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (only specify the folder name), in which case the performance is better than writing to a single file.
| Scenario | Suggested settings |
| --- | --- |
| Full load from large table, with physical partitions. | Partition option: Physical partitions of table. During execution, the service automatically detects the physical partitions, and copies data by partitions. To check if your table has physical partitions or not, you can refer to this query. |
| Full load from large table, without physical partitions, while with an integer or datetime column for data partitioning. | Partition option: Dynamic range partition. Partition column (optional): Specify the column used to partition data. If not specified, the index or primary key column is used. Partition upper bound and partition lower bound (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in the table; all rows in the table will be partitioned and copied. If not specified, the copy activity auto-detects the values. For example, if your partition column "ID" has values ranging from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions: IDs in ranges <=20, [21, 50], [51, 80], and >=81, respectively. |
| Load a large amount of data by using a custom query, without physical partitions, while with an integer or date/datetime column for data partitioning. | Partition option: Dynamic range partition. Query: SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>. Partition upper bound and partition lower bound (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in the table; all rows in the query result will be partitioned and copied. If not specified, the copy activity auto-detects the value. During execution, the service replaces ?AdfDynamicRangePartitionCondition with the actual column name and value ranges for each partition, and sends it to Azure SQL Database. For example, if your partition column "ID" has values ranging from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions: IDs in ranges <=20, [21, 50], [51, 80], and >=81, respectively. |
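The worked example (IDs from 1 to 100, lower bound 20, upper bound 80, parallel copy 4) can be reproduced with a short sketch. The exact stride arithmetic the service uses internally isn't documented, so this is only an approximation that matches the documented example:

```python
def dynamic_range_partitions(lower, upper, parallel_copies):
    """Approximate the dynamic-range splits from the doc's example: the first
    partition takes everything <= lower bound, the last takes everything
    above the upper bound, and the span in between is split evenly."""
    middle = parallel_copies - 2
    if middle < 1:
        raise ValueError("this sketch needs at least 3 parallel copies")
    stride = (upper - lower) // middle
    ranges = [f"<= {lower}"]
    start = lower + 1
    for i in range(middle):
        end = upper if i == middle - 1 else start + stride - 1
        ranges.append(f"[{start}, {end}]")
        start = end + 1
    ranges.append(f">= {upper + 1}")
    return ranges

print(dynamic_range_partitions(20, 80, 4))
```

With bounds 20/80 and parallel copy 4, this yields <=20, [21, 50], [51, 80], and >=81, matching the table above.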
Best practices to load data with the partition option:
1. Choose a distinctive column as the partition column (like a primary key or unique key) to avoid data skew.
2. If the table has a built-in partition, use the partition option "Physical partitions of table" to get better performance.
3. If you use Azure Integration Runtime to copy data, you can set larger "Data Integration Units (DIU)" (>4) to utilize more computing resources. Check the applicable scenarios there.
4. "Degree of copy parallelism" controls the partition numbers; setting this number too large can sometimes hurt performance. We recommend setting this number as (DIU or number of Self-hosted IR nodes) * (2 to 4).
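The parallelism recommendation in item 4 is simple arithmetic; as a quick sketch (the function name is illustrative):

```python
def suggested_parallelism(diu_or_nodes: int, factor: int = 2) -> int:
    """Suggested 'Degree of copy parallelism' per the best practice:
    (DIU or number of Self-hosted IR nodes) * (2 to 4)."""
    if not 2 <= factor <= 4:
        raise ValueError("factor should be between 2 and 4")
    return diu_or_nodes * factor

print(suggested_parallelism(4))     # 4 DIUs at the low end of the range -> 8
print(suggested_parallelism(4, 4))  # upper end of the range -> 16
```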
Example: full load from a large table with physical partitions

JSON
"source": {
    "type": "AzureSqlSource",
    "partitionOption": "PhysicalPartitionsOfTable"
}

Example: query with dynamic range partition

JSON
"source": {
    "type": "AzureSqlSource",
    "query": "SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
        "partitionColumnName": "<partition_column_name>",
        "partitionUpperBound": "<upper_value_of_partition_column (optional)>",
        "partitionLowerBound": "<lower_value_of_partition_column (optional)>"
    }
}
Sample query to check physical partitions:

SQL
SELECT DISTINCT s.name AS SchemaName, t.name AS TableName, pf.name AS PartitionFunctionName, c.name AS ColumnName, iif(pf.name is null, 'no', 'yes') AS HasPartition
FROM sys.tables AS t
LEFT JOIN sys.objects AS o ON t.object_id = o.object_id
LEFT JOIN sys.schemas AS s ON o.schema_id = s.schema_id
LEFT JOIN sys.indexes AS i ON t.object_id = i.object_id
LEFT JOIN sys.index_columns AS ic ON ic.partition_ordinal > 0 AND ic.index_id = i.index_id AND ic.object_id = t.object_id
LEFT JOIN sys.columns AS c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
LEFT JOIN sys.partition_schemes ps ON i.data_space_id = ps.data_space_id
LEFT JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
WHERE s.name = '[your schema]' AND t.name = '[your table name]'

If the table has a physical partition, you would see "HasPartition" as "yes".
Refer to the respective sections about how to configure in the service and best practices.
Append data
Appending data is the default behavior of this Azure SQL Database sink connector. The service does a bulk insert to write to your table efficiently. You can configure the source and sink accordingly in the copy activity.
Upsert data
The copy activity now natively supports loading data into a database temporary table, then updating the data in the sink table if the key exists and otherwise inserting new data. To learn more about upsert settings in copy activities, see Azure SQL Database as the sink.
You can use a stored procedure when built-in copy mechanisms don't serve the purpose. An example is when you want to apply extra
processing before the final insertion of source data into the destination table. Some extra processing examples are when you want to
merge columns, look up additional values, and insert into more than one table.
The following sample shows how to use a stored procedure to do an upsert into a table in Azure SQL Database. Assume that the input data
and the sink Marketing table each have three columns: ProfileID, State, and Category. Do the upsert based on the ProfileID column, and
only apply it for a specific category called "ProductA".
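Before walking through the T-SQL, the intended upsert semantics can be mimicked in plain code. This is only a sketch of the logic the stored procedure performs (in-memory dicts standing in for the Marketing table), not the procedure itself:

```python
def upsert_marketing(existing, incoming, category="ProductA"):
    """Mimic the upsert: match on ProfileID within one category;
    update State on a match, insert the row otherwise."""
    table = {row["ProfileID"]: row for row in existing}
    for row in incoming:
        if row["Category"] != category:
            continue  # only apply the upsert for the given category
        key = row["ProfileID"]
        if key in table:
            table[key]["State"] = row["State"]  # matched: update
        else:
            table[key] = dict(row)              # not matched: insert
    return sorted(table.values(), key=lambda r: r["ProfileID"])

existing = [{"ProfileID": "p1", "State": "WA", "Category": "ProductA"}]
incoming = [
    {"ProfileID": "p1", "State": "CA", "Category": "ProductA"},  # update
    {"ProfileID": "p2", "State": "NY", "Category": "ProductA"},  # insert
    {"ProfileID": "p3", "State": "TX", "Category": "ProductB"},  # ignored
]
merged = upsert_marketing(existing, incoming)
```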
1. In your database, define the table type with the same name as sqlWriterTableType. The schema of the table type is the same as the schema returned by your input data.

SQL
CREATE TYPE [dbo].[MarketingType] AS TABLE
(
    [ProfileID] [varchar](256) NOT NULL,
    [State] [varchar](256) NOT NULL,
    [Category] [varchar](256) NOT NULL
)
2. In your database, define the stored procedure with the same name as sqlWriterStoredProcedureName. It handles input data from
your specified source and merges into the output table. The parameter name of the table type in the stored procedure is the same as
tableName defined in the dataset.
SQL
CREATE PROCEDURE [dbo].[spOverwriteMarketing]
    @Marketing [dbo].[MarketingType] READONLY,
    @category varchar(256)
AS
BEGIN
    MERGE [dbo].[Marketing] AS target
    USING @Marketing AS source
    ON (target.ProfileID = source.ProfileID AND target.Category = @category)
    WHEN MATCHED THEN
        UPDATE SET State = source.State
    WHEN NOT MATCHED THEN
        INSERT (ProfileID, State, Category)
        VALUES (source.ProfileID, source.State, source.Category);
END
3. In your Azure Data Factory or Synapse pipeline, define the SQL sink section in the copy activity as follows:
JSON
"sink": {
    "type": "AzureSqlSink",
    "sqlWriterStoredProcedureName": "spOverwriteMarketing",
    "storedProcedureTableTypeParameterName": "Marketing",
    "sqlWriterTableType": "MarketingType",
    "storedProcedureParameters": {
        "category": {
            "value": "ProductA"
        }
    }
}
When writing data into Azure SQL Database by using a stored procedure, the sink splits the source data into mini-batches and then does the insert, so the extra query in the stored procedure can be executed multiple times. If you have a query for the copy activity to run before writing data into Azure SQL Database, don't add it to the stored procedure; add it in the Pre-copy script box.
Source transformation
Settings specific to Azure SQL Database are available in the Source Options tab of the source transformation.
Input: Select whether you point your source at a table (equivalent of Select * from <table-name> ) or enter a custom SQL query.
Query: If you select Query in the input field, enter a SQL query for your source. This setting overrides any table that you've chosen in the
dataset. Order By clauses aren't supported here, but you can set a full SELECT FROM statement. You can also use user-defined table
functions. select * from udfGetData() is a UDF in SQL that returns a table. This query will produce a source table that you can use in your
data flow. Using queries is also a great way to reduce rows for testing or for lookups.
Tip
The common table expression (CTE) in SQL is not supported in the mapping data flow Query mode, because the prerequisite of using this mode is that queries can be used in the SQL query FROM clause, and CTEs cannot be used this way. To use CTEs, you need to create a stored procedure using the following query:

SQL
CREATE PROC CTESP @query nvarchar(max)
AS
BEGIN
EXECUTE sp_executesql @query
END

Then use the Stored procedure mode in the source transformation of the mapping data flow and set the @query parameter like with CTE as (select 'test' as a) select * from CTE. Then you can use CTEs as expected.
Stored procedure: Choose this option if you wish to generate a projection and source data from a stored procedure that is executed from
your source database. You can type in the schema, procedure name, and parameters, or click on Refresh to ask the service to discover the
schemas and procedure names. Then you can click on Import to import all procedure parameters using the form @paraName .
SQL Example: Select * from MyTable where customerId > 1000 and customerId < 2000
Parameterized SQL Example: "select * from {$tablename} where orderyear > {$year}"
Batch size: Enter a batch size to chunk large data into reads.
Isolation Level: The default for SQL sources in mapping data flow is read uncommitted. You can change the isolation level here to one of
these values:
Read Committed
Read Uncommitted
Repeatable Read
Serializable
None (ignore isolation level)
Enable incremental extract: Use this option to tell ADF to only process rows that have changed since the last time that the pipeline
executed.
Incremental column: When using the incremental extract feature, you must choose the date/time or numeric column that you wish to use
as the watermark in your source table.
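Conceptually, incremental extract boils down to remembering a watermark and filtering on it in the next run. A simplified sketch follows; the table, column, and checkpoint value are illustrative, not the service's internals:

```python
def incremental_query(table: str, watermark_column: str, last_watermark: str) -> str:
    """Build the kind of filter an incremental extract implies: only rows
    whose watermark column advanced past the last recorded value."""
    return (f"SELECT * FROM {table} "
            f"WHERE {watermark_column} > '{last_watermark}' "
            f"ORDER BY {watermark_column}")

# A later run would use the watermark recorded at the end of the previous run.
checkpoint = "2023-01-01T00:00:00"
q = incremental_query("dbo.Orders", "ModifiedDate", checkpoint)
print(q)
```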
Enable native change data capture (Preview): Use this option to tell ADF to only process delta data captured by SQL change data capture technology since the last time the pipeline executed. With this option, the delta data, including row inserts, updates, and deletions, will be loaded automatically without any incremental column required. You need to enable change data capture on Azure SQL DB before using this option in ADF. For more information about this option in ADF, see native change data capture.
Start reading from beginning: Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline
with incremental extract turned on.
Sink transformation
Settings specific to Azure SQL Database are available in the Settings tab of the sink transformation.
Update method: Determines what operations are allowed on your database destination. The default is to only allow inserts. To update,
upsert, or delete rows, an alter-row transformation is required to tag rows for those actions. For updates, upserts and deletes, a key column
or columns must be set to determine which row to alter.
The column name that you pick as the key here will be used by the service as part of the subsequent update, upsert, delete. Therefore, you
must pick a column that exists in the Sink mapping. If you wish to not write the value to this key column, then click "Skip writing key
columns".
You can parameterize the key column used here for updating your target Azure SQL Database table. If you have multiple columns for a composite key, click on "Custom Expression" and you will be able to add dynamic content using the data flow expression language, which can include an array of strings with column names for a composite key.
Table action: Determines whether to recreate or remove all rows from the destination table prior to writing.
Batch size: Controls how many rows are being written in each bucket. Larger batch sizes improve compression and memory optimization,
but risk out of memory exceptions when caching data.
Use TempDB: By default, the service will use a global temporary table to store data as part of the loading process. You can alternatively uncheck the "Use TempDB" option and instead ask the service to store the temporary holding table in the database that is being used for this sink.
Pre and Post SQL scripts: Enter multi-line SQL scripts that will execute before (pre-processing) and after (post-processing) data is written to your sink database.
Tip
1. It's recommended to break single batch scripts with multiple commands into multiple batches.
2. Only Data Definition Language (DDL) and Data Manipulation Language (DML) statements that return a simple update count can
be run as part of a batch. Learn more from Performing batch operations
Error row handling: By default, a data flow run will fail on the first error it gets. You can choose Continue on error, which allows your data flow to complete even if individual rows have errors. The service provides different options for you to handle these error rows.
Transaction Commit: Choose whether your data gets written in a single transaction or in batches. A single transaction will provide worse performance, but no data written will be visible to others until the transaction completes.
Output rejected data: If enabled, you can output the error rows into a csv file in Azure Blob Storage or an Azure Data Lake Storage Gen2
account of your choosing. This will write the error rows with three additional columns: the SQL operation like INSERT or UPDATE, the data
flow error code, and the error message on the row.
Report success on error: If enabled, the data flow will be marked as a success even if error rows are found.
Data type mapping for Azure SQL Database
When data is copied from or to Azure SQL Database, the following mappings are used from Azure SQL Database data types to Azure Data
Factory interim data types. The same mappings are used by the Synapse pipeline feature, which implements Azure Data Factory directly. To
learn how the copy activity maps the source schema and data type to the sink, see Schema and data type mappings.
| Azure SQL Database data type | Data Factory interim data type |
| --- | --- |
| bigint | Int64 |
| binary | Byte[] |
| bit | Boolean |
| date | DateTime |
| datetime | DateTime |
| datetime2 | DateTime |
| datetimeoffset | DateTimeOffset |
| decimal | Decimal |
| float | Double |
| image | Byte[] |
| int | Int32 |
| money | Decimal |
| numeric | Decimal |
| real | Single |
| rowversion | Byte[] |
| smalldatetime | DateTime |
| smallint | Int16 |
| smallmoney | Decimal |
| sql_variant | Object |
| time | TimeSpan |
| timestamp | Byte[] |
| tinyint | Byte |
| uniqueidentifier | Guid |
| varbinary | Byte[] |
| xml | String |
Note
For data types that map to the Decimal interim type, currently Copy activity supports precision up to 28. If you have data with
precision larger than 28, consider converting to a string in SQL query.
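One way to see why the precision-28 limit matters, and what the suggested cast-to-string workaround looks like. The column and table names below are made up for illustration:

```python
from decimal import Decimal

# A decimal(38,10) value carries up to 38 digits of precision; the copy
# activity's interim Decimal type only preserves up to 28.
value = Decimal("1234567890123456789012345678.9012345678")  # 38 digits total
digit_count = len(value.as_tuple().digits)

# Suggested workaround: cast the high-precision column to a string in the
# source query so all digits survive the copy unchanged.
source_query = ("SELECT CAST(HighPrecisionCol AS VARCHAR(50)) AS HighPrecisionCol "
                "FROM dbo.MyTable")
print(digit_count)  # 38, i.e. more than the 28 the interim type preserves
```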
Using Always Encrypted
To copy data from or to Azure SQL Database with Always Encrypted, follow these steps:
1. Store the Column Master Key (CMK) in an Azure Key Vault. Learn more on how to configure Always Encrypted by using Azure Key Vault.
2. Make sure to get access to the key vault where the Column Master Key (CMK) is stored. Refer to this article for required permissions.
3. Create linked service to connect to your SQL database and enable 'Always Encrypted' function by using either managed identity or
service principal.
Note
The pipeline supports the following scenarios:
1. Either the source or sink data store uses managed identity or service principal as the key provider authentication type.
2. Both source and sink data stores use managed identity as the key provider authentication type.
3. Both source and sink data stores use the same service principal as the key provider authentication type.
Note
Currently, Azure SQL Database Always Encrypted is only supported for source transformation in mapping data flows.
Native change data capture
Make sure you keep the pipeline and activity names unchanged, so that the checkpoint can be recorded by ADF and you get changed data from the last run automatically. If you change your pipeline name or activity name, the checkpoint will be reset, which leads you to start from the beginning or get changes from now on in the next run. If you do want to change the pipeline name or activity name but still keep the checkpoint to get changed data from the last run automatically, use your own checkpoint key in the dataflow activity to achieve that.
When you debug the pipeline, this feature works the same. Be aware that the checkpoint will be reset when you refresh your browser during the debug run. After you are satisfied with the pipeline result from the debug run, you can go ahead to publish and trigger the pipeline. The first time you trigger your published pipeline, it automatically restarts from the beginning or gets changes from now on.
In the monitoring section, you always have the chance to rerun a pipeline. When you do so, the changed data is always captured from the previous checkpoint of your selected pipeline run.
Example 1:
When you directly chain a source transformation referencing a SQL CDC enabled dataset with a sink transformation referencing a database in a mapping data flow, the changes that happen on the SQL source are automatically applied to the target database, so you easily get a data replication scenario between databases. You can use the update method in the sink transformation to select whether you want to allow insert, update, or delete on the target database. The example script in the mapping data flow is as below.
JSON
source(output(
		id as integer,
		name as string
	),
	allowSchemaDrift: true,
	validateSchema: false,
	enableNativeCdc: true,
	netChanges: true,
	skipInitialLoad: false,
	isolationLevel: 'READ_UNCOMMITTED') ~> source1
source1 sink(validateSchema: false,
	deletable:true,
	insertable:true,
	updateable:true,
	upsertable:true,
	keys:['id'],
	format: 'table',
	skipDuplicateMapInputs: true,
	skipDuplicateMapOutputs: true) ~> sink1
Example 2:
To enable an ETL scenario instead of data replication between databases via SQL CDC, you can use expressions in the mapping data flow, including isInsert(1), isUpdate(1), and isDelete(1), to differentiate rows by operation type. The following example script derives a column whose value is 1 for inserted rows, 2 for updated rows, and 3 for deleted rows, so that downstream transforms can process the delta data.
JSON
source(output(
		id as integer,
		name as string
	),
	allowSchemaDrift: true,
	validateSchema: false,
	enableNativeCdc: true,
	netChanges: true,
	skipInitialLoad: false,
	isolationLevel: 'READ_UNCOMMITTED') ~> source1
source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1
Known limitation:
Only net changes from SQL CDC will be loaded by ADF via cdc.fn_cdc_get_net_changes_.
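For background, the net-changes read maps to the net-changes table-valued function that SQL Server generates per capture instance when CDC is enabled. An illustrative T-SQL call (the capture instance name dbo_mytable is hypothetical; substitute your own):

```sql
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_mytable');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

-- Returns one row per changed key, collapsed to the net change in the LSN range
SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_mytable(@from_lsn, @to_lsn, 'all');
```

A row that was inserted and then deleted within the range produces no net change, which is why intermediate versions of a row are not loaded.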
Next steps
For a list of data stores supported as sources and sinks by the copy activity, see Supported data stores and formats.
Tutorial: Deploy an ASP.NET app to
Azure with Azure SQL Database
Article • 09/21/2022
Azure App Service provides a highly scalable, self-patching web hosting service. This
tutorial shows you how to deploy a data-driven ASP.NET app in App Service and
connect it to Azure SQL Database. When you're finished, you have an ASP.NET app
running in Azure and connected to SQL Database.
If you don't have an Azure subscription, create an Azure free account before you
begin.
Prerequisites
To complete this tutorial:
Install Visual Studio 2022 with the ASP.NET and web development and Azure
development workloads.
If you've installed Visual Studio already, add the workloads in Visual Studio by clicking
Tools > Get Tools and Features.
2. Type F5 to run the app. The app is displayed in your default browser.
Note
If you only installed Visual Studio and the prerequisites, you may have to
install missing packages via NuGet.
3. Select the Create New link and create a couple to-do items.
4. Test the Edit, Details, and Delete links.
The app uses a database context to connect with the database. In this sample, the
database context uses a connection string named MyDbConnection . The connection
string is set in the Web.config file and referenced in the Models/MyDatabaseContext.cs
file. The connection string name is used later in the tutorial to connect the Azure app to
an Azure SQL Database.
3. Make sure that Azure App Service (Windows) is selected and click Next.
Note
An App Service plan specifies the location, size, and features of the web server farm that
hosts your app. You can save money when you host multiple apps by configuring the
web apps to share a single App Service plan.
2. In the Configure App Service Plan dialog, configure the new App Service plan with
the following settings and click OK:
4. The Publish dialog shows the resources you've configured. Click Finish.
Create a server and database
Before creating a database, you need a logical SQL server: a construct that contains a group of databases managed together.
1. In the Publish dialog, scroll down to the Service Dependencies section. Next to
SQL Server Database, click Configure.
Note
Be sure to configure the SQL Database from the Publish page instead of the
Connected Services page.
The server name is used as part of the default URL for your server,
<server_name>.database.windows.net . It must be unique across all servers in Azure
SQL. Change the server name to a value you want.
Remember this username and password. You need them to manage the server
later.
Important
Even though your password in the connection strings is masked (in Visual
Studio and also in App Service), the fact that it's maintained somewhere adds
to the attack surface of your app. App Service can use managed service
identities to eliminate this risk by removing the need to maintain secrets in
your code or app configuration at all. For more information, see Next steps.
6. Click OK.
7. In the Azure SQL Database dialog, keep the default generated Database Name.
Select Create and wait for the database resources to be created.
2. In the Database connection string Name, type MyDbConnection. This name must
match the connection string that is referenced in Models/MyDatabaseContext.cs.
3. In Database connection user name and Database connection password, type the
administrator username and password you used in Create a server.
Note
If you see Local user secrets files instead, you must have configured SQL
Database from the Connected Services page instead of the Publish page.
5. Wait for the configuration wizard to finish and click Close.
1. In the Publish tab, scroll back up to the top and click Publish. Once your ASP.NET app is deployed to Azure, your default browser launches with the URL of the deployed app.
2. At the top of SQL Server Object Explorer, click the Add SQL Server button.
2. Select the database that you created earlier. The connection you created earlier is
automatically filled at the bottom.
3. Type the database administrator password you created earlier and click Connect.
Allow client connection from your computer
The Create a new firewall rule dialog is opened. By default, a server only allows
connections to its databases from Azure services, such as your Azure app. To connect to
your database from outside of Azure, create a firewall rule at the server level. The
firewall rule allows the public IP address of your local computer.
Here, you can perform the most common database operations, such as run
queries, create views and stored procedures, and more.
2. Expand your connection > Databases > <your database> > Tables. Right-click on
the Todoes table and select View Data.
Update app with Code First Migrations
You can use the familiar tools in Visual Studio to update your database and app in
Azure. In this step, you use Code First Migrations in Entity Framework to make a change
to your database schema and publish it to Azure.
For more information about using Entity Framework Code First Migrations, see Getting
Started with Entity Framework 6 Code First using MVC 5.
Open Models\Todo.cs in the code editor. Add the following property to the ToDo class:
C#
public bool Done { get; set; }
1. From the Tools menu, click NuGet Package Manager > Package Manager
Console.
2. In the Package Manager Console, enable Code First Migrations:
PowerShell
Enable-Migrations
3. Add a migration:
PowerShell
Add-Migration AddProperty
4. Update the database:
PowerShell
Update-Database
5. Type Ctrl+F5 to run the app. Test the edit, details, and create links.
If the application loads without errors, then Code First Migrations has succeeded.
However, your page still looks the same because your application logic is not using this
new property yet.
Make some changes in your code to use the Done property. For simplicity in this tutorial,
you're only going to change the Index and Create views to see the property in action.
1. Open Controllers\TodosController.cs.
2. Find the Create() method on line 52 and add Done to the list of properties in the
Bind attribute. When you're done, your Create() method signature looks like the
following code:
C#
public ActionResult Create([Bind(Include =
"Description,CreatedDate,Done")] Todo todo)
3. Open Views\Todos\Create.cshtml.
4. In the Razor code, you should see a <div class="form-group"> element that uses
model.Description , and then another <div class="form-group"> element that uses
model.CreatedDate . After the second element, add the following:
C#
<div class="form-group">
<div class="col-md-10">
<div class="checkbox">
@Html.EditorFor(model => model.Done)
</div>
</div>
</div>
5. Open Views\Todos\Index.cshtml.
6. Search for the empty <th></th> element. Just above this element, add the
following Razor code:
C#
<th>
@Html.DisplayNameFor(model => model.Done)
</th>
7. Find the <td> element that contains the Html.ActionLink() helper methods. Above
this <td> , add another <td> element with the following Razor code:
C#
<td>
@Html.DisplayFor(modelItem => item.Done)
</td>
That's all you need to see the changes in the Index and Create views.
8. Type Ctrl+F5 to run the app.
You can now add a to-do item and check Done. Then it should show up in your
homepage as a completed item. Remember that the Edit view doesn't show the Done
field, because you didn't change the Edit view.
Now that your code change works, including database migration, you publish it to your
Azure app and update your SQL Database with Code First Migrations too.
4. Select Execute Code First Migrations (runs on application start), then click Save.
Publish your changes
Now that you enabled Code First Migrations in your Azure app, publish your code
changes.
2. Try adding to-do items again and selecting Done. They should show up on your
homepage as completed items.
All your existing to-do items are still displayed. When you republish your ASP.NET
application, existing data in your SQL Database is not lost. Also, Code First Migrations
only changes the data schema and leaves your existing data intact.
Open Controllers\TodosController.cs.
Each action starts with a Trace.WriteLine() method. This code is added to show you
how to add trace messages to your Azure app.
However, you don't see any of the trace messages yet. That's because when you
first select View Streaming Logs, your Azure app sets the trace level to Error ,
which only logs error events (with the Trace.TraceError() method).
1. To change the trace levels to output other trace messages, go back to the publish
page.
3. In the portal management page for your app, from the left menu, select App
Service logs.
4. Under Application Logging (File System), select Verbose in Level. Click Save.
Tip
You can experiment with different trace levels to see what types of messages
are displayed for each level. For example, the Information level includes all
logs created by Trace.TraceInformation() , Trace.TraceWarning() , and
Trace.TraceError() , but not logs created by Trace.WriteLine() .
Console
To stop the log-streaming service, click the Stop monitoring button in the Output
window.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't
expect to need these resources in the future, you can delete them by deleting the
resource group.
1. From your web app's Overview page in the Azure portal, select the
myResourceGroup link under Resource group.
2. On the resource group page, make sure that the listed resources are the ones you
want to delete.
3. Select Delete, type myResourceGroup in the text box, and then select Delete.
Next steps
In this tutorial, you learned how to:
Advance to the next tutorial to learn how to easily improve the security of your
connection to Azure SQL Database.
Tutorial: Connect to SQL Database from App Service without secrets using a
managed identity
More resources:
This article shows you how to use Azure Functions to create a scheduled job that
connects to an Azure SQL Database or Azure SQL Managed Instance. The function code
cleans up rows in a table in the database. The new C# function is created based on a
pre-defined timer trigger template in Visual Studio 2019. To support this scenario, you
must also set a database connection string as an app setting in the function app. For
Azure SQL Managed Instance you need to enable public endpoint to be able to connect
from Azure Functions. This scenario uses a bulk operation against the database.
If this is your first experience working with C# Functions, you should read the Azure
Functions C# developer reference.
Prerequisites
Complete the steps in the article Create your first function using Visual Studio to
create a local function app that targets version 2.x or a later version of the runtime.
You must also have published your project to a function app in Azure.
You must add a server-level firewall rule for the public IP address of the computer
you use for this quickstart. This rule is required to be able to access the SQL Database
instance from your local computer.
2. Select SQL Databases from the left-hand menu, and select your database on the
SQL databases page.
3. Select Connection strings under Settings and copy the complete ADO.NET
connection string. For Azure SQL Managed Instance copy connection string for
public endpoint.
You must have previously published your app to Azure. If you haven't already done so,
Publish your function app to Azure.
1. In Solution Explorer, right-click the function app project and choose Publish.
2. On the Publish page, select the ellipses ( ... ) in the Hosting area, and choose
Manage Azure App Service settings.
3. In Application Settings select Add setting, in New app setting name type
sqldb_connection , and select OK.
4. In the new sqldb_connection setting, paste the connection string you copied in
the previous section into the Local field and replace {your_username} and
{your_password} placeholders with real values. Select Insert value from local to
copy the updated value into the Remote field, and then select OK.
The connection strings are stored encrypted in Azure (Remote). To prevent leaking
secrets, the local.settings.json project file (Local) should be excluded from source
control, such as by using a .gitignore file.
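For local debugging, the same setting lives under Values in local.settings.json. A sketch with placeholder values follows; the storage and runtime entries shown are the usual defaults for a local .NET function project, included here only for completeness:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "sqldb_connection": "Server=tcp:{your-server}.database.windows.net,1433;Initial Catalog={your-database};User ID={your-username};Password={your-password};Encrypt=True;"
  }
}
```

Adding a local.settings.json entry to your .gitignore file keeps this local copy out of source control.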
2. In Solution Explorer, right-click the function app project and choose Manage
NuGet Packages.
3. On the Browse tab, search for Microsoft.Data.SqlClient and, when found, select
it.
4. In the Microsoft.Data.SqlClient page, select version 5.1.0 and then click Install.
5. When the install completes, review the changes and then click OK to close the
Preview window.
Now, you can add the C# function code that connects to your SQL Database.
2. With the Azure Functions template selected, name the new item something like
DatabaseCleanup.cs and select Add.
3. In the New Azure function dialog box, choose Timer trigger and then Add. This
dialog creates a code file for the timer triggered function.
4. Open the new code file and add the following using statements at the top of the
file:
C#
using Microsoft.Data.SqlClient;
using System.Threading.Tasks;
C#
[FunctionName("DatabaseCleanup")]
public static async Task Run([TimerTrigger("*/15 * * * * *")] TimerInfo myTimer, ILogger log)
{
    // Get the connection string from app settings and use it to create
    // a connection.
    var str = Environment.GetEnvironmentVariable("sqldb_connection");
    using (SqlConnection conn = new SqlConnection(str))
    {
        conn.Open();
        var text = "UPDATE SalesLT.SalesOrderHeader SET [Status] = 5 WHERE ShipDate < GetDate();";
        using (SqlCommand cmd = new SqlCommand(text, conn))
        {
            // Execute the command and log the number of rows affected.
            var rows = await cmd.ExecuteNonQueryAsync();
            log.LogInformation($"{rows} rows were updated");
        }
    }
}
This function runs every 15 seconds to update the Status column based on the
ship date. To learn more about the Timer trigger, see Timer trigger for Azure
Functions.
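For reference, the TimerTrigger schedule is an NCRONTAB expression with six fields, seconds first. As an illustration, a less aggressive once-a-day schedule could replace the 15-second one; this is a fragment, not a complete function:

```csharp
// NCRONTAB: {second} {minute} {hour} {day} {month} {day-of-week}
// "0 0 1 * * *" fires once a day at 1:00 AM.
[FunctionName("DatabaseCleanup")]
public static void Run([TimerTrigger("0 0 1 * * *")] TimerInfo myTimer, ILogger log)
```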
6. Press F5 to start the function app. The Azure Functions Core Tools execution
window opens behind Visual Studio.
7. At 15 seconds after startup, the function runs. Watch the output and note the
number of rows updated in the SalesOrderHeader table.
On the first execution, you should update 32 rows of data. Following runs update
no data rows, unless you make changes to the SalesOrderHeader table data so that
more rows are selected by the UPDATE statement.
If you plan to publish this function, remember to change the TimerTrigger attribute to a
more reasonable cron schedule than every 15 seconds. You also need to make sure that
your function app can access the Azure SQL Database or Azure SQL Managed Instance.
For more information, see one of the following links based on your type of Azure SQL:
Next steps
Next, learn how to use Functions with Logic Apps to integrate with other services.
Create a function that integrates with Logic Apps
Programmer reference for coding functions and defining triggers and bindings.
Testing Azure Functions
This how-to guide shows how to access your SQL database from a workflow in Azure
Logic Apps with the SQL Server connector. You can then create automated workflows
that run when triggered by events in your SQL database or in other systems and run
actions to manage your SQL data and resources.
For example, your workflow can run actions that get, insert, and delete data or that can
run SQL queries and stored procedures. Your workflow can check for new records in a
non-SQL database, do some processing work, use the results to create new records in
your SQL database, and send email alerts about the new records.
If you're new to Azure Logic Apps, review the following get started documentation:
Create an example Standard logic app workflow in single-tenant Azure Logic Apps
SQL Server
Azure SQL Database
Azure SQL Managed Instance
Consumption (multi-tenant Azure Logic Apps): Managed connector, which appears in the designer under the Standard label. For more information, review the following documentation.
Consumption (integration service environment (ISE)): Managed connector, which appears in the designer under the Standard label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation.
Standard (single-tenant Azure Logic Apps and App Service Environment v3, Windows plans only): Managed connector, which appears in the designer under the Azure label, and built-in connector, which appears in the designer under the Built-in label and is service provider based. The built-in version differs in the following ways:
Limitations
For more information, review the SQL Server managed connector reference or the SQL
Server built-in connector reference.
Prerequisites
An Azure account and subscription. If you don't have a subscription, sign up for a
free Azure account .
The information required to create an SQL database connection, such as your SQL
server and database name. If you're using Windows Authentication or SQL Server
Authentication to authenticate access, you also need your user name and
password. You can usually find this information in the connection string.
Important
If you use an SQL Server connection string that you copied directly from the
Azure portal, you have to manually add your password to the connection string.
For an SQL database in Azure, the connection string has the following format:
Server=tcp:{your-server-name}.database.windows.net,1433;Initial Catalog=
{your-database-name};Persist Security Info=False;User ID={your-user-
name};Password={your-
password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificat
e=False;Connection Timeout=30;
For an on-premises SQL server, the connection string has the following format:
Server={your-server-address};Database={your-database-name};User Id={your-
user-name};Password={your-password};
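To make the placeholder format concrete, here's a small Python sketch, not part of the connector documentation, that assembles the Azure-format string shown above from its parts; the server name and credential values are invented:

```python
def build_azure_sql_connection_string(server, database, user, password):
    """Assemble an ADO.NET-style connection string in the Azure SQL format shown above."""
    return (
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Initial Catalog={database};Persist Security Info=False;"
        f"User ID={user};Password={password};"
        "MultipleActiveResultSets=False;Encrypt=True;"
        "TrustServerCertificate=False;Connection Timeout=30;"
    )

# Invented example values for illustration only
conn_str = build_azure_sql_connection_string(
    "fabrikam-server", "Fabrikam-DB", "dbadmin", "P@ssw0rd!")
print(conn_str)
```

In practice you would copy the real string from the Azure portal rather than build it by hand, since the portal includes the correct server address and options for your database.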
The logic app workflow where you want to access your SQL database. To start your
workflow with a SQL Server trigger, you have to start with a blank workflow. To use
a SQL Server action, start your workflow with any trigger.
Consumption workflow
In multi-tenant Azure Logic Apps, you need the on-premises data gateway
installed on a local computer and a data gateway resource that's already
created in Azure.
In an ISE, you don't need the on-premises data gateway for SQL Server
Authentication and non-Windows Authentication connections, and you can
use the ISE-versioned SQL Server connector. For Windows Authentication,
you need the on-premises data gateway on a local computer and a data
gateway resource that's already created in Azure. The ISE-version connector
doesn't support Windows Authentication, so you have to use the regular SQL
Server managed connector.
Standard workflow
You can use the SQL Server built-in connector or managed connector.
To use the built-in connector, you can authenticate your connection with
either a managed identity, Azure Active Directory, or a connection string. You
can adjust connection pooling by specifying parameters in the connection
string. For more information, review Connection Pooling.
To use the SQL Server managed connector, follow the same requirements as
a Consumption logic app workflow in multi-tenant Azure Logic Apps. For
other connector requirements, review the SQL Server managed connector
reference.
Consumption
1. In the Azure portal , open your Consumption logic app and blank workflow
in the designer.
2. In the designer, under the search box, select Standard. Then, follow these
general steps to add the SQL Server managed trigger you want.
This example continues with the trigger named When an item is created.
3. If prompted, provide the information for your connection. When you're done,
select Create.
4. After the trigger information box appears, provide the necessary information
required by your selected trigger.
For this example, in the trigger named When an item is created, provide the
values for the SQL server name and database name, if you didn't previously
provide them. Otherwise, from the Table name list, select the table that you
want to use. Select the Frequency and Interval to set the schedule for the
trigger to check for new items.
5. If any other properties are available for this trigger, open the Add new
parameter list, and select those properties relevant to your scenario.
This trigger returns only one row from the selected table, and nothing else. To
perform other tasks, continue by adding either a SQL Server connector action
or another action that performs the next task that you want in your logic app
workflow.
For example, to view the data in this row, you can add other actions that
create a file that includes the fields from the returned row, and then send
email alerts. To learn about other available actions for this connector, see the
SQL Server managed connector reference.
6. When you're done, save your workflow. On the designer toolbar, select Save.
When you save your workflow, this step automatically publishes your updates to your
deployed logic app, which is live in Azure. With only a trigger, your workflow just checks
the SQL database based on your specified schedule. You have to add an action that
responds to the trigger.
In this example, the logic app workflow starts with the Recurrence trigger, and calls an
action that gets a row from an SQL database.
Consumption
1. In the Azure portal , open your Consumption logic app and workflow in the
designer.
2. In the designer, follow these general steps to add the SQL Server managed
action you want.
This example continues with the action named Get row, which gets a single
record.
3. If prompted, provide the information for your connection. When you're done,
select Create.
4. After the action information box appears, from the Table name list, select the
table that you want to use. In the Row id property, enter the ID for the record
that you want.
For this example, the table name is SalesLT.Customer.
This action returns only one row from the selected table, and nothing else. To
view the data in this row, add other actions. For example, such actions might
create a file, include the fields from the returned row, and store the file in a
cloud storage account. To learn about other available actions for this
connector, see the connector's reference page.
5. When you're done, save your workflow. On the designer toolbar, select Save.
After you provide this information, continue with the following steps based on your
target database:
2. For Authentication type, select the authentication that's required and enabled on
your database in Azure SQL Database or SQL Managed Instance:
Connection string: Supported only in Standard workflows with the SQL Server built-in connector.
Active Directory OAuth: Supported only in Standard workflows with the SQL Server built-in connector. For more information, see the following documentation: Authentication for SQL Server connector.
Logic Apps Managed Identity: Supported with the SQL Server managed connector and ISE-versioned connector. In Standard workflows, this authentication type is available for the SQL Server built-in connector, but the option is named Managed identity instead. Requires the following items:
--- A valid managed identity that's enabled on your logic app resource and has access to your database.
--- Contributor access to the resource group that includes the SQL Server resource.
Service principal (Azure AD application): Requires an Azure AD application and service principal. For more information, see Create an Azure AD application and service principal that can access resources using the Azure portal.
Azure AD Integrated: Supported with the SQL Server managed connector and ISE-versioned connector.
SQL Server Authentication: Supported with the SQL Server managed connector and ISE-versioned connector. Requires the following items:
--- A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE.
--- A valid user name and strong password that are created and stored in your SQL Server database.
The following examples show how the connection information box might appear if
you use the SQL Server managed connector and select Azure AD Integrated
authentication:
Consumption workflows
Standard workflows
3. After you select Azure AD Integrated, select Sign in. Based on whether you use
Azure SQL Database or SQL Managed Instance, select your user credentials for
authentication.
Server name (required): The address for your SQL server, for example, Fabrikam-Azure-SQL.database.windows.net
Database name (required): The name for your SQL database, for example, Fabrikam-Azure-SQL-DB
Table name (required): The table that you want to use, for example, SalesLT.Customer
Tip
To provide your database and table information, you have these options:
Server=tcp:{your-server-address}.database.windows.net,1433;Initial Catalog={your-database-name};Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
By default, tables in system databases are filtered out, so they might not
automatically appear when you select a system database. As an
alternative, you can manually enter the table name after you select Enter
custom value from the database list.
Consumption workflows
Standard workflows
5. Now, continue with the steps that you haven't completed yet in either Add a SQL
trigger or Add a SQL action.
1. For connections to your on-premises SQL server that require the on-premises data
gateway, make sure that you've completed these prerequisites.
Otherwise, your data gateway resource doesn't appear in the Connection Gateway
list when you create your connection.
2. For Authentication Type, select the authentication that's required and enabled on
your SQL Server:
SQL Server Authentication: Supported with the SQL Server managed connector, SQL Server built-in connector, and ISE-versioned connector. Requires the following items:
--- A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE.
--- A valid user name and strong password that are created and stored in your SQL Server.
Windows Authentication: Requires the following items:
--- A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE.
--- A valid Windows user name and password to confirm your identity through your Windows account.
SQL server name (required): The address for your SQL server, for example, Fabrikam-Azure-SQL.database.windows.net
SQL database name (required): The name for your SQL Server database, for example, Fabrikam-Azure-SQL-DB
Username (required): Your user name for the SQL server and database
Password (required): Your password for the SQL server and database
Subscription (required for Windows authentication): The Azure subscription for the data gateway resource that you previously created in Azure
Connection Gateway (required for Windows authentication): The name for the data gateway resource that you previously created in Azure
Tip
Server={your-server-address};Database={your-database-name};User ID={your-user-name};Password={your-password};
The following examples show how the connection information box might appear if
you select Windows authentication.
Consumption workflows
Standard workflows
4. When you're ready, select Create.
5. Now, continue with the steps that you haven't completed yet in either Add a SQL
trigger or Add a SQL action.
To help you manage results as smaller sets, turn on pagination. For more
information, see Get bulk data, records, and items by using pagination and SQL
Pagination for bulk data transfer with Logic Apps.
Create a stored procedure that organizes the results the way that you want. The
SQL Server connector provides many backend features that you can access by
using Azure Logic Apps so that you can more easily automate business tasks that
work with SQL database tables.
When a SQL action gets or inserts multiple rows, your logic app workflow can
iterate through these rows by using an until loop within these limits. However,
your logic app might have to work with record sets so large, for example, thousands
or millions of rows, that you want to minimize the costs resulting from calls to the
database.
To organize the results in the way that you want, you can create a stored
procedure that runs in your SQL instance and uses the SELECT - ORDER BY
statement. This solution gives you more control over the size and structure of your
results. Your logic app calls the stored procedure by using the SQL Server
connector's Execute stored procedure action. For more information, see SELECT -
ORDER BY Clause.
Note
The SQL Server connector has a stored procedure timeout limit that's less than 2 minutes. Some stored procedures might take longer than this limit to complete, causing a 504 Timeout error. You can work around this problem by using a SQL completion trigger, native SQL pass-through query, a state table, and server-side jobs. For this task, you can use the Azure Elastic Job Agent for Azure SQL Database. For SQL Server on premises and SQL Managed Instance, you can use the SQL Server Agent. To learn more, see Handle long-running stored procedure timeouts in the SQL Server connector for Azure Logic Apps.
1. In the Azure portal , open your logic app and workflow in the designer.
2. View the output format by performing a test run. Copy and save your sample
output.
3. In the designer, under the action where you call the stored procedure, add the
built-in action named Parse JSON.
4. In the Parse JSON action, select Use sample payload to generate schema.
5. In the Enter or paste a sample JSON payload box, paste your sample output, and
select Done.
Note
If you get an error that Azure Logic Apps can't generate a schema, check that your sample output's syntax is correctly formatted. If you still can't generate the schema, in the Schema box, manually enter the schema.
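As an illustration, suppose a test run of the stored-procedure action returned the hypothetical payload below; pasting it into the sample-payload box generates a schema with matching property types. The ResultSets/Table1 shape and the column names here are invented for the example, so always use the sample output from your own test run:

```json
{
  "ResultSets": {
    "Table1": [
      { "CustomerId": 1, "CustomerName": "Contoso" }
    ]
  }
}
```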
7. To reference the JSON content properties, select inside the edit boxes where you
want to reference those properties so that the dynamic content list appears. In the
list, under the Parse JSON heading, select the data tokens for the JSON content
properties that you want.
Next steps
Managed connectors for Azure Logic Apps
Built-in connectors for Azure Logic Apps
Index data from Azure SQL
Article • 01/19/2023
In this article, learn how to configure an indexer that imports content from Azure SQL
Database or an Azure SQL managed instance and makes it searchable in Azure
Cognitive Search.
This article supplements Create an indexer with information that's specific to Azure SQL.
It uses the REST APIs to demonstrate a three-part workflow common to all indexers:
create a data source, create an index, create an indexer.
A description of the change detection policies supported by the Azure SQL indexer
so that you can set up incremental indexing.
Prerequisites
An Azure SQL database with data in a single table or view.
Use a table if your data is large or if you need incremental indexing using SQL's
native change detection capabilities.
Use a view if you need to consolidate data from multiple tables. Large views aren't ideal for the SQL indexer. A workaround is to create a new table just for ingestion into your Cognitive Search index. You can then use SQL integrated change tracking, which is easier to implement than High Water Mark.
To work through the examples in this article, you'll need a REST client, such as Postman.
Other approaches for creating an Azure SQL indexer include the Azure SDKs or the Import data wizard in the Azure portal. If you're using the Azure portal, make sure that access to all public networks is enabled in the Azure SQL firewall and that the client has access via an inbound rule.
HTTP
POST https://myservice.search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: admin-key

{
    "name" : "myazuresqldatasource",
    "type" : "azuresql",
    "container" : { },
    "dataChangeDetectionPolicy": null,
    "dataDeletionDetectionPolicy": null,
    "encryptionKey": null,
    "identity": null
}
2. Provide a unique name for the data source that follows Azure Cognitive Search
naming conventions.
For more information, see Connect to Azure SQL Database indexer using a
managed identity.
1. Create or update an index to define search fields that will store data:
HTTP
Content-Type: application/json

{
    "name": "mysearchindex",
    "fields": [{
        "name": "id",
        "type": "Edm.String",
        "key": true,
        "searchable": false
    }, {
        "name": "description",
        "type": "Edm.String",
        "filterable": false,
        "searchable": true,
        "sortable": false,
        "facetable": false,
        "suggestions": true
    }]
}
2. Create a document key field ("key": true) that uniquely identifies each search
document. This is the only field that's required in a search index. Typically, the
table's primary key is mapped to the index key field. The document key must be
unique and non-null. The values can be numeric in source data, but in a search
index, a key is always a string.
3. Create more fields to add more searchable content. See Create an index for
guidance.
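The key rule above can be illustrated with a short sketch. Assuming a source row represented as a plain dictionary (the column name id is hypothetical), the document key is converted to a string before indexing:

```python
def to_search_document(row, key_column="id"):
    """Build a search document from a source row: the document key must be a
    unique, non-null string, even when the source primary key is numeric."""
    key = row[key_column]
    if key is None:
        raise ValueError("document key must be non-null")
    doc = dict(row)
    doc[key_column] = str(key)  # numeric source keys become strings in the index
    return doc

doc = to_search_document({"id": 42, "description": "red bicycle"})
```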
SQL data type and the Cognitive Search field types it maps to:
bit: Edm.Boolean, Edm.String
uniqueidentifier: Edm.String
1. Create or update an indexer by giving it a name and referencing the data source
and target index:
HTTP
Content-Type: application/json

{
    "name" : "[my-sqldb-indexer]",
    "dataSourceName" : "[my-sqldb-ds]",
    "targetIndexName" : "[my-search-index]",
    "disabled": null,
    "schedule": null,
    "parameters": {
        "batchSize": null,
        "maxFailedItems": 0,
        "maxFailedItemsPerBatch": 0,
        "base64EncodeKeys": false,
        "configuration": {
            "queryTimeout": "00:04:00",
            "convertHighWaterMarkToRowVersion": false,
            "disableOrderByHighWaterMarkColumn": false
        }
    },
    "fieldMappings": [],
    "encryptionKey": null
}
2. Under parameters, the configuration section has parameters that are specific to
Azure SQL:
The default query timeout for SQL query execution is 5 minutes, which you can override.
"convertHighWaterMarkToRowVersion" optimizes for the High Water Mark
change detection policy. Change detection policies are set in the data source.
If you're using the native change detection policy, this parameter has no
effect.
3. Specify field mappings if there are differences in field name or type, or if you need
multiple versions of a source field in the search index.
An indexer runs automatically when it's created. You can prevent this by setting
"disabled" to true. To control indexer execution, run an indexer on demand or put it on a
schedule.
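For example, running an indexer on demand is a POST to the indexer's run endpoint. The following sketch builds that request with Python's standard library; the service name, indexer name, and key are placeholders, and the request isn't sent here:

```python
from urllib.request import Request

def run_indexer_request(service, indexer, api_key, api_version="2020-06-30"):
    """Build (but don't send) the REST request that runs an indexer on demand:
    POST https://{service}.search.windows.net/indexers/{indexer}/run"""
    url = (f"https://{service}.search.windows.net"
           f"/indexers/{indexer}/run?api-version={api_version}")
    return Request(url, method="POST", headers={"api-key": api_key})

req = run_indexer_request("myservice", "myindexer", "admin-key")
```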
HTTP
GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
Content-Type: application/json
The response includes status and the number of items processed. It should look similar
to the following example:
JSON

{
    "status": "running",
    "lastResult": {
        "status": "success",
        "errorMessage": null,
        "startTime": "2022-02-21T00:23:24.957Z",
        "endTime": "2022-02-21T00:36:47.752Z",
        "errors": [],
        "itemsProcessed": 1599501,
        "itemsFailed": 0,
        "initialTrackingState": null,
        "finalTrackingState": null
    },
    "executionHistory": [
        {
            "status": "success",
            "errorMessage": null,
            "startTime": "2022-02-21T00:23:24.957Z",
            "endTime": "2022-02-21T00:36:47.752Z",
            "errors": [],
            "itemsProcessed": 1599501,
            "itemsFailed": 0,
            "initialTrackingState": null,
            "finalTrackingState": null
        }
    ]
}
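A response like this can also be checked programmatically. The following Python sketch pulls the outcome, item counts, and elapsed time out of the lastResult section, using the values from the example above:

```python
import json
from datetime import datetime

def summarize_last_result(status_json):
    """Extract outcome, item counts, and elapsed seconds from a
    Get Indexer Status response body."""
    last = json.loads(status_json)["lastResult"]
    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
    start = datetime.strptime(last["startTime"].replace("Z", "+0000"), fmt)
    end = datetime.strptime(last["endTime"].replace("Z", "+0000"), fmt)
    return {"status": last["status"],
            "itemsProcessed": last["itemsProcessed"],
            "itemsFailed": last["itemsFailed"],
            "seconds": (end - start).total_seconds()}

summary = summarize_last_result('''{"lastResult": {
    "status": "success",
    "startTime": "2022-02-21T00:23:24.957Z",
    "endTime": "2022-02-21T00:36:47.752Z",
    "itemsProcessed": 1599501,
    "itemsFailed": 0}}''')
```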
For Azure SQL indexers, there are two change detection policies: SQL integrated change tracking and high water mark.
SQL integrated change tracking has the following database requirements:
SQL Server 2012 SP3 and later, if you're using SQL Server on Azure VMs
Azure SQL Database or SQL Managed Instance
Tables only (no views)
On the database, enable change tracking for the table
No composite primary key (a primary key containing more than one column) on
the table
No clustered index on the table. As a workaround, a clustered index can be dropped and re-created as a nonclustered index; however, performance in the source may be worse than with a clustered index.
Change detection policies are added to data source definitions. To use this policy, create
or update your data source like this:
HTTP
POST https://myservice.search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: admin-key

{
    "name" : "myazuresqldatasource",
    "type" : "azuresql",
    "dataChangeDetectionPolicy" : {
        "@odata.type" : "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
    }
}
When using SQL integrated change tracking policy, don't specify a separate data
deletion detection policy. The SQL integrated change tracking policy has built-in
support for identifying deleted rows. However, for the deleted rows to be detected
automatically, the document key in your search index must be the same as the primary
key in the SQL table.
Note
When you use TRUNCATE TABLE to remove a large number of rows from a SQL table, reset the indexer to clear the change tracking state so that the row deletions are picked up.
The high water mark column must meet the following requirements:
All inserts specify a value for the column.
All updates to an item also change the value of the column.
The value of this column increases with each insert or update.
Queries with the following WHERE and ORDER BY clauses can be executed
efficiently: WHERE [High Water Mark Column] > [Current High Water Mark Value]
ORDER BY [High Water Mark Column]
Note
We strongly recommend using the rowversion data type for the high water mark
column. If any other data type is used, change tracking isn't guaranteed to capture
all changes in the presence of transactions executing concurrently with an indexer
query. When using rowversion in a configuration with read-only replicas, you must
point the indexer at the primary replica. Only a primary replica can be used for data
sync scenarios.
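The incremental query pattern these requirements support can be simulated in a few lines of Python. This toy in-memory version stands in for the real WHERE ... ORDER BY query; the column name hwm is hypothetical:

```python
def fetch_changed_rows(rows, current_mark):
    """Simulate the indexer's incremental query:
    WHERE hwm > current_mark ORDER BY hwm (toy in-memory stand-in)."""
    changed = sorted((r for r in rows if r["hwm"] > current_mark),
                     key=lambda r: r["hwm"])
    new_mark = changed[-1]["hwm"] if changed else current_mark
    return changed, new_mark

# "hwm" is a hypothetical high water mark column that grows on every write.
rows = [{"id": "a", "hwm": 1}, {"id": "b", "hwm": 3}, {"id": "c", "hwm": 2}]
changed, mark = fetch_changed_rows(rows, current_mark=1)
```

After the run, the indexer persists the new mark (3 here) and the next run only reads rows written after it.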
Change detection policies are added to data source definitions. To use this policy, create
or update your data source like this:
HTTP
POST https://myservice.search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: admin-key

{
    "name" : "myazuresqldatasource",
    "type" : "azuresql",
    "dataChangeDetectionPolicy" : {
        "@odata.type" : "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
        "highWaterMarkColumnName" : "[a rowversion or high water mark column name]"
    }
}
Note
If the source table doesn't have an index on the high water mark column, queries
used by the SQL indexer may time out. In particular, the ORDER BY [High Water Mark
Column] clause requires an index to run efficiently when the table contains many
rows.
convertHighWaterMarkToRowVersion
If you're using a rowversion data type for the high water mark column, consider setting
the convertHighWaterMarkToRowVersion property in indexer configuration. Setting this
property to true results in the following behaviors:
Uses the rowversion data type for the high water mark column in the indexer SQL
query. Using the correct data type improves indexer query performance.
Subtracts one from the rowversion value before the indexer query runs. Views with
one-to-many joins may have rows with duplicate rowversion values. Subtracting
one ensures the indexer query doesn't miss these rows.
To enable this property, create or update the indexer with the following configuration:
HTTP
"parameters" : {
    "configuration" : {
        "convertHighWaterMarkToRowVersion" : true
    }
}
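The boundary-row problem that subtracting one solves can be shown with a toy example. Suppose two rows from a one-to-many join share rowversion 5, and a run is interrupted after saving mark 5 without having read both rows: a strict "greater than 5" query would never revisit them. This Python sketch is an illustration, not the indexer's actual implementation:

```python
def rows_for_next_run(rows, saved_mark, convert_to_rowversion):
    """Rows the next indexer run will read. Subtracting one from the saved
    mark re-includes every row that shares the boundary rowversion value."""
    mark = saved_mark - 1 if convert_to_rowversion else saved_mark
    return [r["id"] for r in rows if r["rowversion"] > mark]

# Rows "b" and "c" share rowversion 5; the previous run saved mark 5.
rows = [{"id": "a", "rowversion": 4},
        {"id": "b", "rowversion": 5},
        {"id": "c", "rowversion": 5}]
missed = rows_for_next_run(rows, saved_mark=5, convert_to_rowversion=False)
picked = rows_for_next_run(rows, saved_mark=5, convert_to_rowversion=True)
```

Re-reading rows at the boundary value costs a little duplicate work but guarantees none of them are skipped.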
queryTimeout
If you encounter timeout errors, set the queryTimeout indexer configuration setting to a
value higher than the default 5-minute timeout. For example, to set the timeout to 10
minutes, create or update the indexer with the following configuration:
HTTP
"parameters" : {
    "configuration" : {
        "queryTimeout" : "00:10:00"
    }
}
disableOrderByHighWaterMarkColumn
You can also disable the ORDER BY [High Water Mark Column] clause. However, this isn't
recommended because if the indexer execution is interrupted by an error, the indexer
has to re-process all rows if it runs later, even if the indexer has already processed
almost all the rows at the time it was interrupted. To disable the ORDER BY clause, use
the disableOrderByHighWaterMarkColumn setting in the indexer definition:
HTTP
"parameters" : {
    "configuration" : {
        "disableOrderByHighWaterMarkColumn" : true
    }
}
If rows are physically removed from the table, Azure Cognitive Search has no way to infer the presence of records that no longer exist. However, you can use the "soft-delete" technique to logically delete rows without removing them from the table: add a column to your table or view and mark rows as deleted using that column.
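For illustration, here's how the soft-delete convention plays out over a couple of rows. The column name IsDeleted and marker value 1 are hypothetical; you choose your own when you define the policy:

```python
def split_soft_deleted(rows, column="IsDeleted", marker=1):
    """Partition rows by the soft-delete column: live rows are (re)indexed,
    marked rows are removed from the search index."""
    live = [r for r in rows if r.get(column) != marker]
    deleted = [r for r in rows if r.get(column) == marker]
    return live, deleted

rows = [{"id": "1", "IsDeleted": 0}, {"id": "2", "IsDeleted": 1}]
live, deleted = split_soft_deleted(rows)
```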
When using the soft-delete technique, you can specify the soft delete policy as follows
when creating or updating the data source:
HTTP
…,
"dataDeletionDetectionPolicy" : {
    "@odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
    "softDeleteColumnName" : "[a column name]",
    "softDeleteMarkerValue" : "[a value that marks a row as deleted]"
}
FAQ
Q: Can I index Always Encrypted columns?
No. Always Encrypted columns aren't currently supported by Cognitive Search indexers.
Q: Can I use Azure SQL indexer with SQL databases running on IaaS VMs in Azure?
Yes. However, you need to allow your search service to connect to your database. For
more information, see Configure a connection from an Azure Cognitive Search indexer
to SQL Server on an Azure VM.
Q: Can I use Azure SQL indexer with SQL databases running on-premises?
Not directly. We don't recommend or support a direct connection, as doing so would require you to open your databases to internet traffic.
Q: Can I use a secondary replica in a failover cluster as a data source?
It depends. For full indexing of a table or view, you can use a secondary replica.
For incremental indexing, Azure Cognitive Search supports two change detection
policies: SQL integrated change tracking and High Water Mark.
Our standard recommendation is to use the rowversion data type for the high water
mark column. However, using rowversion relies on the MIN_ACTIVE_ROWVERSION function,
which isn't supported on read-only replicas. Therefore, you must point the indexer to a
primary replica if you're using rowversion.
If you attempt to use rowversion on a read-only replica, you'll see the following error: "Using a rowversion column for change tracking isn't supported on secondary (read-only) availability replicas. Please update the datasource and specify a connection to the primary availability replica. Current database 'Updateability' property is 'READ_ONLY'".
Q: Can I use an alternative, non-rowversion column for high water mark change
tracking?
It's not recommended. Only rowversion allows for reliable data synchronization.
However, depending on your application logic, it may be safe if:
You can ensure that when the indexer runs, there are no outstanding transactions
on the table that’s being indexed (for example, all table updates happen as a batch
on a schedule, and the Azure Cognitive Search indexer schedule is set to avoid
overlapping with the table update schedule).
Applies to:
SQL Server
Azure SQL Managed Instance
Microsoft SQL Server and Azure SQL Managed Instance let you implement some functionality with .NET languages by using native common language runtime (CLR) integration, as SQL Server server-side modules (procedures, functions, and triggers). The CLR supplies managed code with services such as cross-language integration, code access security, object lifetime management, and debugging and profiling support. For SQL Server users and application developers, CLR integration means that you can now write stored procedures, triggers, user-defined types, user-defined functions (scalar and table-valued), and user-defined aggregate functions using any .NET Framework language, including Microsoft Visual Basic .NET and Microsoft Visual C#. SQL Server includes the .NET Framework version 4 pre-installed.
Warning
CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer supported as a security boundary. A CLR assembly created with PERMISSION_SET = SAFE may be able to access external system resources, call unmanaged code, and acquire sysadmin privileges. The clr strict security option enhances the security of CLR assemblies. clr strict security is enabled by default, and treats SAFE and EXTERNAL_ACCESS assemblies as if they were marked UNSAFE. The clr strict security option can be disabled for backward compatibility, but this is not recommended.
This 6-minute video shows you how to use CLR in Azure SQL Managed Instance:
https://channel9.msdn.com/Shows/Data-Exposed/Its-just-SQL-CLR-in-Azure-SQL-Database-Managed-Instance/player?WT.mc_id=dataexposed-c9-niner&nocookie=true&locale=en-us&embedUrl=%2Fsql%2Frelational-databases%2Fclr-integration%2Fcommon-language-runtime-integration-overview
When to use CLR modules
CLR Integration enables you to implement complex features that are available in .NET
Framework such as regular expressions, code for accessing external resources (servers,
web services, databases), custom encryption, etc. Some of the benefits of the server-side
CLR integration are:
Improved safety and security. Managed code runs in a common language runtime environment, hosted by the Database Engine. SQL Server leverages this to provide a safer and more secure alternative to the extended stored procedures available in earlier versions of SQL Server.
Ability to define data types and aggregate functions. User-defined types and
user-defined aggregates are two new managed database objects that expand the
storage and querying capabilities of SQL Server.
Potential for improved performance and scalability. In many situations, the .NET
Framework language compilation and execution models deliver improved
performance over Transact-SQL.
Describes the kinds of objects that can be built using CLR integration. Also reviews the
requirements for building database objects using CLR integration.
What's New in CLR Integration
See Also
Installing the .NET Framework (SQL Server only)
This tutorial demonstrates how to store data in Azure SQL Database using Spring Data JDBC.
In this tutorial, we include two authentication methods: Azure Active Directory (Azure
AD) authentication and SQL Database authentication. The Passwordless tab shows the
Azure AD authentication and the Password tab shows the SQL Database authentication.
SQL Database authentication uses accounts stored in SQL Database. If you choose to
use passwords as credentials for the accounts, these credentials will be stored in the
user table. Because these passwords are stored in SQL Database, you need to manage
the rotation of the passwords by yourself.
Prerequisites
An Azure subscription - create one for free.
Apache Maven.
Azure CLI.
sqlcmd Utility.
If you don't have one, create an Azure SQL Server instance named sqlservertest
and a database named demo . For instructions, see Quickstart: Create a single
database - Azure SQL Database.
If you don't have a Spring Boot application, create a Maven project with the Spring
Initializr . Be sure to select Maven Project and, under Dependencies, add the
Spring Web, Spring Data JDBC, and MS SQL Server Driver dependencies, and
then select Java version 8 or higher.
To be able to use your database, open the server's firewall to allow the local IP address
to access the database server. For more information, see Tutorial: Secure a database in
Azure SQL Database.
If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.
Passwordless (Recommended)
1. First, install the Service Connector passwordless extension for the Azure CLI:
Azure CLI
az extension add --name serviceconnector-passwordless --upgrade
2. Then, use the following command to create the Azure AD non-admin user:
Azure CLI
--resource-group <your-resource-group-name> \
--connection sql_conn \
--target-resource-group <your-resource-group-name> \
--server sqlservertest \
--database demo \
--user-account \
--query authInfo.userName \
--output tsv
The Azure AD admin you created is an SQL database admin user, so you don't need
to create a new user.
Important
To install the Spring Cloud Azure Starter module, add the following dependencies to
your pom.xml file:
XML
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-dependencies</artifactId>
<version>4.9.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
Note
XML
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-starter</artifactId>
</dependency>
Passwordless (Recommended)
properties
logging.level.org.springframework.jdbc.core=DEBUG
spring.datasource.url=jdbc:sqlserver://sqlservertest.database.windows.net:1433;databaseName=demo;authentication=DefaultAzureCredential;
spring.sql.init.mode=always
Warning
SQL
3. Create a new Todo Java class. This class is a domain model mapped onto the todo table that will be created automatically by Spring Boot. The following code omits the getters and setters methods.
Java
package com.example.demo;

import org.springframework.data.annotation.Id;

public class Todo {

    public Todo() {
    }

    public Todo(String description, String details, boolean done) {
        this.description = description;
        this.details = details;
        this.done = done;
    }

    @Id
    private Long id;
    private String description;
    private String details;
    private boolean done;
}
Java
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.data.repository.CrudRepository;
import java.util.stream.Collectors;
import java.util.stream.Stream;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    ApplicationListener<ApplicationReadyEvent> basicsApplicationListener(TodoRepository repository) {
        // The sample data saved here is illustrative; the original sample is elided in the source.
        return event -> repository
            .saveAll(Stream.of(new Todo("configuration", "congratulations, you have set up correctly!", true))
                .collect(Collectors.toList()))
            .forEach(System.out::println);
    }
}
Tip
5. Start the application. The application stores data into the database. You'll see logs
similar to the following example:
shell
com.example.demo.Todo@4bdb04c8
Next steps
Azure for Spring developers
Use Spring Data JPA with Azure SQL
Database
Article • 04/19/2023
This tutorial demonstrates how to store data in Azure SQL Database using Spring Data JPA.
The Java Persistence API (JPA) is the standard Java API for object-relational mapping.
In this tutorial, we include two authentication methods: Azure Active Directory (Azure
AD) authentication and SQL Database authentication. The Passwordless tab shows the
Azure AD authentication and the Password tab shows the SQL Database authentication.
SQL Database authentication uses accounts stored in SQL Database. If you choose to
use passwords as credentials for the accounts, these credentials will be stored in the
user table. Because these passwords are stored in SQL Database, you need to manage
the rotation of the passwords by yourself.
Prerequisites
An Azure subscription - create one for free.
Apache Maven.
Azure CLI.
sqlcmd Utility.
If you don't have one, create an Azure SQL Server instance named sqlservertest
and a database named demo . For instructions, see Quickstart: Create a single
database - Azure SQL Database.
If you don't have a Spring Boot application, create a Maven project with the Spring
Initializr . Be sure to select Maven Project and, under Dependencies, add the
Spring Web, Spring Data JPA, and MS SQL Server Driver dependencies, and then
select Java version 8 or higher.
Important
To be able to use your database, open the server's firewall to allow the local IP address
to access the database server. For more information, see Tutorial: Secure a database in
Azure SQL Database.
If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.
Passwordless (Recommended)
To use passwordless connections, see Tutorial: Secure a database in Azure SQL
Database or use Service Connector to create an Azure AD admin user for your
Azure SQL Database server, as shown in the following steps:
1. First, install the Service Connector passwordless extension for the Azure CLI:
Azure CLI
2. Then, use the following command to create the Azure AD non-admin user:
Azure CLI
--resource-group <your-resource-group-name> \
--connection sql_conn \
--target-resource-group <your-resource-group-name> \
--server sqlservertest \
--database demo \
--user-account \
--query authInfo.userName \
--output tsv
The Azure AD admin you created is an SQL database admin user, so you don't need
to create a new user.
Important
To install the Spring Cloud Azure Starter module, add the following dependencies to
your pom.xml file:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-dependencies</artifactId>
<version>4.9.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
Note
XML
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-starter</artifactId>
</dependency>
Passwordless (Recommended)
properties
logging.level.org.hibernate.SQL=DEBUG
spring.datasource.url=jdbc:sqlserver://sqlservertest.database.windows.net:1433;databaseName=demo;authentication=DefaultAzureCredential;
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServer2016Dialect
spring.jpa.hibernate.ddl-auto=create-drop
Warning
2. Create a new Todo Java class. This class is a domain model mapped onto the todo table that will be created automatically by JPA. The following code omits the getters and setters methods.
Java
package com.example.demo;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Todo {

    public Todo() {
    }

    public Todo(String description, String details, boolean done) {
        this.description = description;
        this.details = details;
        this.done = done;
    }

    @Id
    @GeneratedValue
    private Long id;
    private String description;
    private String details;
    private boolean done;
}
Java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.data.jpa.repository.JpaRepository;
import java.util.stream.Collectors;
import java.util.stream.Stream;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    ApplicationListener<ApplicationReadyEvent> basicsApplicationListener(TodoRepository repository) {
        // The sample data saved here is illustrative; the original sample is elided in the source.
        return event -> repository
            .saveAll(Stream.of(new Todo("configuration", "congratulations, you have set up correctly!", true))
                .collect(Collectors.toList()))
            .forEach(System.out::println);
    }
}
Tip
The DefaultAzureCredential class determines which authentication method to use at runtime. This approach enables your app to use different authentication methods in different environments (such as local and production environments) without implementing environment-specific code. For more information, see the Default Azure credential section of Authenticate Azure-hosted Java applications.
4. Start the application. You'll see logs similar to the following example:
shell
com.example.demo.Todo@1f
Next steps
Azure for Spring developers
Use Spring Data R2DBC with Azure SQL
Database
Article • 05/26/2023
This article demonstrates creating a sample application that uses Spring Data R2DBC to store and retrieve information in Azure SQL Database, by using the R2DBC implementation for Microsoft SQL Server from the r2dbc-mssql GitHub repository.
R2DBC brings reactive APIs to traditional relational databases. You can use it with
Spring WebFlux to create fully reactive Spring Boot applications that use non-blocking
APIs. It provides better scalability than the classic "one thread per connection" approach.
Prerequisites
An Azure subscription - create one for free.
Apache Maven.
Azure CLI.
sqlcmd Utility.
Bash
export AZ_RESOURCE_GROUP=database-workshop
export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
export AZ_LOCATION=<YOUR_AZURE_REGION>
export AZ_SQL_SERVER_ADMIN_USERNAME=spring
export AZ_SQL_SERVER_ADMIN_PASSWORD=<YOUR_AZURE_SQL_ADMIN_PASSWORD>
export AZ_SQL_SERVER_NON_ADMIN_USERNAME=nonspring
export AZ_SQL_SERVER_NON_ADMIN_PASSWORD=<YOUR_AZURE_SQL_NON_ADMIN_PASSWORD>
export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
Replace the placeholders with the following values, which are used throughout this
article:
<YOUR_DATABASE_NAME>: The name of your Azure SQL Database server. It should be unique across Azure.
<YOUR_AZURE_REGION>: The Azure region you'll use. You can use eastus by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by using az account list-locations.
<YOUR_AZURE_SQL_ADMIN_PASSWORD> and <YOUR_AZURE_SQL_NON_ADMIN_PASSWORD>: The password of your Azure SQL Database server, which should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
<YOUR_LOCAL_IP_ADDRESS>: The IP address of your local computer, from which you'll run your Spring Boot application. One convenient way to find it is to open whatismyip.akamai.com.
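As a sanity check before running the setup commands, you can validate a candidate password against the complexity rules described above. This is a simplified sketch; the real policy also rejects passwords that contain the account name:

```python
import string

def meets_sql_password_policy(password):
    """Check length (at least eight characters) and that characters come from
    at least three of the four categories listed above."""
    if len(password) < 8:
        return False
    categories = [
        any(c.isupper() for c in password),  # English uppercase letters
        any(c.islower() for c in password),  # English lowercase letters
        any(c.isdigit() for c in password),  # numbers (0-9)
        any(c in string.punctuation for c in password),  # non-alphanumeric
    ]
    return sum(categories) >= 3

ok = meets_sql_password_policy("Sp1ng!Passw0rd")
weak = meets_sql_password_policy("password")
```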
Azure CLI
az group create \
--name $AZ_RESOURCE_GROUP \
--location $AZ_LOCATION \
--output tsv
Note
The MS SQL password has to meet specific criteria, and setup will fail with a non-compliant password. For more information, see Password Policy.
Azure CLI
az sql server create \
--resource-group $AZ_RESOURCE_GROUP \
--name $AZ_DATABASE_NAME \
--location $AZ_LOCATION \
--admin-user $AZ_SQL_SERVER_ADMIN_USERNAME \
--admin-password $AZ_SQL_SERVER_ADMIN_PASSWORD \
--output tsv
Because you configured your local IP address at the beginning of this article, you can
open the server's firewall by running the following command:
Azure CLI
If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.
Obtain the IP address of your host machine by running the following command in WSL:
Bash
cat /etc/resolv.conf
Copy the IP address following the term nameserver , then use the following command to
set an environment variable for the WSL IP Address:
Bash
export AZ_WSL_IP_ADDRESS=<the-copied-IP-address>
Then, use the following command to open the server's firewall to your WSL-based app:
Azure CLI
Azure CLI
az sql db create \
--resource-group $AZ_RESOURCE_GROUP \
--name demo \
--server $AZ_DATABASE_NAME \
--output tsv
Create a SQL script called create_user.sql for creating a non-admin user. Add the
following contents and save it locally:
Bash
Then, use the following command to run the SQL script to create the non-admin user:
Bash
Note
For more information about creating SQL database users, see CREATE USER
(Transact-SQL).
Bash
XML
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-mssql</artifactId>
<scope>runtime</scope>
</dependency>
properties
logging.level.org.springframework.data.r2dbc=DEBUG
spring.r2dbc.url=r2dbc:pool:mssql://$AZ_DATABASE_NAME.database.windows.net:1433/demo
spring.r2dbc.username=nonspring@$AZ_DATABASE_NAME
spring.r2dbc.password=$AZ_SQL_SERVER_NON_ADMIN_PASSWORD
Note
You should now be able to start your application by using the provided Maven wrapper
as follows:
Bash
./mvnw spring-boot:run
Java
package com.example.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.ClassPathResource;
import org.springframework.data.r2dbc.connectionfactory.init.ConnectionFactoryInitializer;
import org.springframework.data.r2dbc.connectionfactory.init.ResourceDatabasePopulator;
import io.r2dbc.spi.ConnectionFactory;
@SpringBootApplication
public class DemoApplication {
@Bean
public ConnectionFactoryInitializer initializer(ConnectionFactory
connectionFactory) {
ConnectionFactoryInitializer initializer = new
ConnectionFactoryInitializer();
initializer.setConnectionFactory(connectionFactory);
ResourceDatabasePopulator populator = new
ResourceDatabasePopulator(new ClassPathResource("schema.sql"));
initializer.setDatabasePopulator(populator);
return initializer;
}
}
This Spring bean uses a file called schema.sql, so create that file in the
src/main/resources folder, and add the following text:
SQL
Stop the running application, and start it again using the following command. The
application will now use the demo database that you created earlier, and create a todo
table inside it.
Bash
./mvnw spring-boot:run
Create a new Todo Java class, next to the DemoApplication class, using the following
code:
Java
package com.example.demo;

import org.springframework.data.annotation.Id;

public class Todo {

    public Todo() {
    }

    @Id
    private Long id;
    private String description;
    private String details;
    private boolean done;
}

This class is a domain model mapped onto the todo table that you created before.
To manage that class, you need a repository. Define a new TodoRepository interface in
the same package, using the following code:
Java
package com.example.demo;

import org.springframework.data.repository.reactive.ReactiveCrudRepository;

public interface TodoRepository extends ReactiveCrudRepository<Todo, Long> {
}
Finish the application by creating a controller that can store and retrieve data.
Implement a TodoController class in the same package, and add the following code:
Java
package com.example.demo;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
@RestController
@RequestMapping("/")
public class TodoController {

    private final TodoRepository todoRepository;

    public TodoController(TodoRepository todoRepository) {
        this.todoRepository = todoRepository;
    }

    @PostMapping("/")
    @ResponseStatus(HttpStatus.CREATED)
    public Mono<Todo> createTodo(@RequestBody Todo todo) {
        return todoRepository.save(todo);
    }

    @GetMapping("/")
    public Flux<Todo> getTodos() {
        return todoRepository.findAll();
    }
}
Finally, halt the application and start it again using the following command:
Bash
./mvnw spring-boot:run
Test the application
To test the application, you can use cURL.
First, create a new "todo" item in the database using the following command:
Bash
JSON
Next, retrieve the data by using a new cURL request with the following command:
Bash
curl http://127.0.0.1:8080
This command will return the list of "todo" items, including the item you've created, as
shown here:
JSON
Congratulations! You've created a fully reactive Spring Boot application that uses R2DBC
to store and retrieve data from Azure SQL Database.
Clean up resources
To clean up all resources used during this quickstart, delete the resource group by using
the following command:
Azure CLI
az group delete \
--name $AZ_RESOURCE_GROUP \
--yes
Next steps
To learn more about deploying a Spring Data application to Azure Spring Apps and
using managed identity, see Tutorial: Deploy a Spring application to Azure Spring Apps
with a passwordless connection to an Azure database.
To learn more about Spring and Azure, continue to the Spring on Azure documentation
center.
Spring on Azure
See also
For more information about Spring Data R2DBC, see Spring's reference
documentation .
For more information about using Azure with Java, see Azure for Java developers and
Working with Azure DevOps and Java.
Tutorial: Migrate SQL Server to Azure
SQL Database using DMS (classic)
Article • 03/08/2023
Important
Note
This tutorial uses an older version of the Azure Database Migration Service. For
improved functionality and supportability, consider migrating to Azure SQL
Database by using the Azure SQL migration extension for Azure Data Studio.
You can use Azure Database Migration Service to migrate the databases from a SQL
Server instance to Azure SQL Database. In this tutorial, you migrate the
AdventureWorks2016 database restored to an on-premises instance of SQL Server 2016
(or later) to a single database or pooled database in Azure SQL Database by using Azure
Database Migration Service.
In this tutorial, you:
- Assess and evaluate your on-premises database for any blocking issues by using the Data Migration Assistant.
- Use the Data Migration Assistant to migrate the database sample schema.
- Register the Azure DataMigration resource provider.
- Create an instance of Azure Database Migration Service.
- Create a migration project by using Azure Database Migration Service.
- Run the migration.
- Monitor the migration.
Prerequisites
To complete this tutorial, you need to:
Enable the TCP/IP protocol, which is disabled by default during SQL Server Express
installation, by following the instructions in the article Enable or Disable a Server
Network Protocol.
Create a database in Azure SQL Database, which you do by following the details in
the article Create a database in Azure SQL Database using the Azure portal. For
purposes of this tutorial, the name of the Azure SQL Database is assumed to be
AdventureWorksAzure, but you can provide whatever name you wish.
Note
If you use SQL Server Integration Services (SSIS) and want to migrate the
catalog database for your SSIS projects/packages (SSISDB) from SQL Server to
Azure SQL Database, the destination SSISDB will be created and managed
automatically on your behalf when you provision SSIS in Azure Data Factory
(ADF). For more information about migrating SSIS packages, see the article
Migrate SQL Server Integration Services packages to Azure.
Download and install the latest version of the Data Migration Assistant.
Create a Microsoft Azure Virtual Network for Azure Database Migration Service by
using the Azure Resource Manager deployment model, which provides site-to-site
connectivity to your on-premises source servers by using either ExpressRoute or
VPN. For more information about creating a virtual network, see the Virtual
Network Documentation, and especially the quickstart articles with step-by-step
details.
Note
During virtual network setup, if you use ExpressRoute with network peering to
Microsoft, add the following service endpoints to the subnet in which the
service will be provisioned:
- Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
- Storage endpoint
- Service bus endpoint
Ensure that your virtual network's network security group (NSG) outbound rules
don't block outbound traffic on port 443 to the ServiceBus, Storage, and
AzureMonitor service tags. For more detail on Azure virtual network NSG traffic
filtering, see the article Filter network traffic with network security groups.
Open your Windows firewall to allow Azure Database Migration Service to access
the source SQL Server, which by default is TCP port 1433. If your default instance is
listening on some other port, add that to the firewall.
If you're running multiple named SQL Server instances using dynamic ports, you
may wish to enable the SQL Browser Service and allow access to UDP port 1434
through your firewalls so that Azure Database Migration Service can connect to a
named instance on your source server.
When using a firewall appliance in front of your source database(s), you may need
to add firewall rules to allow Azure Database Migration Service to access the
source database(s) for migration.
Create a server-level IP firewall rule for Azure SQL Database to allow Azure
Database Migration Service access to the target databases. Provide the subnet
range of the virtual network used for Azure Database Migration Service.
Ensure that the credentials used to connect to source SQL Server instance have
CONTROL SERVER permissions.
Ensure that the credentials used to connect to target Azure SQL Database instance
have CONTROL DATABASE permission on the target databases.
Important
PowerShell

$readerActions = `
    "Microsoft.Network/networkInterfaces/ipConfigurations/read", `
    "Microsoft.DataMigration/*/read", `
    "Microsoft.Resources/subscriptions/resourceGroups/read"

$writerActions = `
    "Microsoft.DataMigration/services/*/write", `
    "Microsoft.DataMigration/services/*/delete", `
    "Microsoft.DataMigration/services/*/action", `
    "Microsoft.Network/virtualNetworks/subnets/join/action", `
    "Microsoft.Network/virtualNetworks/write", `
    "Microsoft.Network/virtualNetworks/read", `
    "Microsoft.Resources/deployments/validate/action", `
    "Microsoft.Resources/deployments/*/read", `
    "Microsoft.Resources/deployments/*/write"

$writerActions += $readerActions

$subScopes = ,"/subscriptions/00000000-0000-0000-0000-000000000000/","/subscriptions/11111111-1111-1111-1111-111111111111/"

function New-DmsReaderRole() {
    $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
    $aRole.Name = "Azure Database Migration Reader"
    $aRole.IsCustom = $true
    $aRole.Actions = $readerActions
    $aRole.NotActions = @()
    $aRole.AssignableScopes = $subScopes
    New-AzRoleDefinition -Role $aRole
}

function New-DmsContributorRole() {
    $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
    $aRole.Name = "Azure Database Migration Contributor"
    $aRole.IsCustom = $true
    $aRole.Actions = $writerActions
    $aRole.NotActions = @()
    $aRole.AssignableScopes = $subScopes
    New-AzRoleDefinition -Role $aRole
}

function Update-DmsReaderRole() {
    $aRole = Get-AzRoleDefinition "Azure Database Migration Reader"
    $aRole.Actions = $readerActions
    $aRole.NotActions = @()
    Set-AzRoleDefinition -Role $aRole
}

function Update-DmsContributorRole() {
    $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor"
    $aRole.Actions = $writerActions
    $aRole.NotActions = @()
    Set-AzRoleDefinition -Role $aRole
}

New-DmsReaderRole
New-DmsContributorRole
Update-DmsReaderRole
Update-DmsContributorRole
2. Specify a project name. From the Assessment type drop-down list, select Database
Engine. In the Source server type text box, select SQL Server, and in the Target
server type text box, select Azure SQL Database. Then select Create to create the
project.
When you're assessing the source SQL Server database migrating to a single
database or pooled database in Azure SQL Database, you can choose one or both
of the following assessment report types:
4. On the Select sources screen, in the Connect to a server dialog box, provide the
connection details to your SQL Server, and then select Connect.
5. In the Add sources dialog box, select AdventureWorks2016, select Add, and then
select Start Assessment.
Note
If you use SSIS, DMA does not currently support the assessment of the source
SSISDB. However, SSIS projects/packages will be assessed/validated as they
are redeployed to the destination SSISDB hosted by Azure SQL Database. For
more information about migrating SSIS packages, see the article Migrate SQL
Server Integration Services packages to Azure.
When the assessment is complete, the results display as shown in the following
graphic:
For databases in Azure SQL Database, the assessments identify feature parity
issues and migration blocking issues for deploying to a single database or pooled
database.
6. Review the assessment results for migration blocking issues and feature parity
issues by selecting the specific options.
Note
Before you create a migration project in Data Migration Assistant, be sure that you
have already provisioned a database in Azure as mentioned in the prerequisites.
Important
If you use SSIS, DMA does not currently support the migration of source SSISDB,
but you can redeploy your SSIS projects/packages to the destination SSISDB hosted
by Azure SQL Database. For more information about migrating SSIS packages, see
the article Migrate SQL Server Integration Services packages to Azure.
1. In the Data Migration Assistant, select the New (+) icon, and then under Project
type, select Migration.
2. Specify a project name, in the Source server type text box, select SQL Server, and
then in the Target server type text box, select Azure SQL Database.
After performing the previous steps, the Data Migration Assistant interface should
appear as shown in the following graphic:
5. In the Data Migration Assistant, specify the source connection details for your SQL
Server, select Connect, and then select the AdventureWorks2016 database.
6. Select Next, under Connect to target server, specify the target connection details
for the Azure SQL Database, select Connect, and then select the
AdventureWorksAzure database you had pre-provisioned in Azure SQL Database.
7. Select Next to advance to the Select objects screen, on which you can specify the
schema objects in the AdventureWorks2016 database that need to be deployed to
Azure SQL Database.
9. Select Deploy schema to deploy the schema to Azure SQL Database, and then
after the schema is deployed, check the target server for any anomalies.
Register the resource provider
Register the Microsoft.DataMigration resource provider before you create your first
instance of the Database Migration Service.
2. Select the subscription in which you want to create the instance of Azure Database
Migration Service, and then select Resource providers.
3. Search for migration, and then select Register for Microsoft.DataMigration.
Select the appropriate Source server type and Target server type, and choose the
Database Migration Service (Classic) option.
3. On the Create Migration Service basics screen:
Select an existing virtual network or create a new one. The virtual network
provides Azure Database Migration Service with access to the source server
and the target instance. For more information about how to create a virtual
network in the Azure portal, see the article Create a virtual network using the
Azure portal.
Select Review + Create to review the details and then select Create to create
the service.
After a few moments, your instance of the Azure Database Migration service
is created and ready to use:
2. On the Azure Database Migration Services screen, select the Azure Database
Migration Service instance that you created.
4. On the New migration project screen, specify a name for the project, in the
Source server type text box, select SQL Server, in the Target server type text box,
select Azure SQL Database, and then for Choose Migration activity type, select
Data migration.
5. Select Create and run activity to create the project and run the migration activity.
Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server
instance name. You can also use the IP Address for situations in which DNS name
resolution isn't possible.
2. If you have not installed a trusted certificate on your source server, select the Trust
server certificate check box.
Important
If you use SSIS, DMS does not currently support the migration of source
SSISDB, but you can redeploy your SSIS projects/packages to the destination
SSISDB hosted by Azure SQL Database. For more information about migrating
SSIS packages, see the article Migrate SQL Server Integration Services
packages to Azure.
1. Choose the database(s) you want to migrate from the list of available databases.
2. Review the expected downtime. If it's acceptable, select Next: Select target >>
2. Select Next: Map to target databases, and then map the source and target
databases for migration.
If the target database contains the same database name as the source database,
Azure Database Migration Service selects the target database by default.
3. Select Next: Configuration migration settings, expand the table listing, and then
review the list of affected fields.
Azure Database Migration Service auto selects all the empty source tables that
exist on the target Azure SQL Database instance. If you want to remigrate tables
that already include data, you need to explicitly select the tables on this blade.
4. Select Next: Summary, review the migration configuration and in the Activity
name text box, specify a name for the migration activity.
Run the migration
Select Start migration.
The migration activity window appears, and the Status of the activity is Pending.
Additional resources
For information about Azure Database Migration Service, see the article What is
Azure Database Migration Service?.
For information about Azure SQL Database, see the article What is the Azure SQL
Database service?.
SQL Database Projects extension
Article • 04/13/2023
The SQL Database Projects extension is an Azure Data Studio and Visual Studio Code
extension for developing SQL databases in a project-based development environment.
Compatible databases include SQL Server, Azure SQL Database, Azure SQL Managed
Instance, and Azure Synapse SQL. A SQL project is a local representation of SQL objects
that comprise the schema for a single database, such as tables, stored procedures, or
functions. When a SQL Database project is built, the output artifact is a .dacpac file. New
and existing databases can be updated to match the contents of the .dacpac by
publishing the SQL Database project with the SQL Database Projects extension or by
publishing the .dacpac with the command line interface SqlPackage.
Extension features
The SQL Database Projects extension provides the following features:
The following features in the SQL Database Projects extension are currently in preview:
Watch this short 10-minute video for an introduction to the SQL Database Projects
extension in Azure Data Studio:
https://channel9.msdn.com/Shows/Data-Exposed/Build-SQL-Database-Projects-Easily-in-Azure-Data-Studio/player?WT.mc_id=dataexposed-c9-niner&nocookie=true&locale=en-us&embedUrl=%2Fsql%2Fazure-data-studio%2Fextensions%2Fsql-database-project-extension
Install
You can install the SQL Database Project extension in Azure Data Studio and Visual
Studio Code.
1. Open the extensions manager to access the available extensions. To do so, either
select the extensions icon or select Extensions in the View menu.
2. Identify the SQL Database Projects extension by typing all or part of the name in
the extension search box. Select an available extension to view its details.
3. Select the extension you want and choose to Install it.
4. Select Reload to enable the extension (only required the first time you install an
extension).
Dependencies
The SQL Database Projects extension has a dependency on the .NET SDK (required) and
AutoRest.Sql (optional).
.NET SDK
The .NET SDK is required for project build functionality and you are prompted to install
the .NET SDK if a supported version can't be detected by the extension. The .NET SDK
can be downloaded and installed for Windows, macOS, and Linux.
If you would like to check currently installed versions of the dotnet SDK, open a terminal
and run the following command:
.NET CLI
dotnet --list-sdks
After installing the .NET SDK, your environment is ready to use the SQL Database
Projects extension.
Common issues
If nuget.org is missing from the list of package sources, you may see error
messages such as:

The specified SDK Microsoft.Build.Sql could not be found.
Unable to find package Microsoft.Build.Sql. No packages exist with this id in the registered source(s).
To check if nuget.org is registered as a source, run dotnet nuget list source from the
command line and review the results for an [Enabled] item referencing nuget.org. If
nuget.org is not registered as a source, run dotnet nuget add source
https://api.nuget.org/v3/index.json -n nuget.org.
Unsupported .NET SDK versions may result in error messages such as:
To force the SQL Database Projects extension to use the v6.x version of the .NET SDK
when multiple versions are installed, add a global.json file to the folder that contains the
SQL project.
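As a sketch, a global.json that pins the SDK to a 6.x version might look like this (the exact version number depends on what you have installed):

```json
{
  "sdk": {
    "version": "6.0.100",
    "rollForward": "latestFeature"
  }
}
```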
AutoRest.Sql
The SQL extension for AutoRest is automatically downloaded and used by the SQL
Database Projects extension when a SQL project is generated from an OpenAPI
specification file.
Limitations
Currently, the SQL Database Project extension has the following limitations:
Workspace
SQL database projects are contained within a logical workspace in Azure Data Studio
and Visual Studio Code. A workspace manages the folder(s) visible in the Explorer pane.
All SQL projects within the folders open in the current workspace are available in the
SQL Database Projects view by default.
You can manually add and remove projects from a workspace through the interface in
the Projects pane. The settings for a workspace can be manually edited in the .code-
workspace file, if necessary.
In the following example .code-workspace file, the folders array lists all folders included
in the Explorer pane and the dataworkspace.excludedProjects array within settings lists
all the SQL projects excluded from the Projects pane.
JSON
"folders": [
"path": "."
},
"name": "WideWorldImportersDW",
"path": "..\\WideWorldImportersDW"
],
"settings": {
"dataworkspace.excludedProjects": [
"AdventureWorksLT.sqlproj"
Next steps
Getting Started with the SQL Database Projects extension
Build and Publish a project with SQL Database Projects extension
SQL Server extension for Visual Studio
Code
Article • 04/03/2023
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
This article shows how to use the mssql extension for Visual Studio Code (Visual Studio
Code) to work with databases in SQL Server on Windows, macOS, and Linux, as well as
Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. The
mssql extension for Visual Studio Code lets you connect to a SQL Server, query with
Transact-SQL (T-SQL), and view the results.
1. Select File > New File or press Ctrl+N. Visual Studio Code opens a new Plain Text
file by default.
2. Select Plain Text on the lower status bar, or press Ctrl+K > M, and select SQL from
the languages dropdown.
Note
If this is the first time you have used the extension, the extension installs the
SQL Tools Service in the background.
If you open an existing file that has a .sql file extension, the language mode is
automatically set to SQL.
Note
A SQL file, such as the empty SQL file you created, must have focus in the
code editor before you can execute the mssql commands.
4. Then select Create to create a new connection profile for your SQL Server.
5. Follow the prompts to specify the properties for the new connection profile. After
specifying each value, press Enter to continue.
Connection properties:

Server name or ADO connection string: Specify the SQL Server instance name. Use localhost to connect to a SQL Server instance on your local machine. To connect to a remote SQL Server, enter the name of the target SQL Server, or its IP address. To connect to a SQL Server container, specify the IP address of the container's host machine. If you need to specify a port, use a comma to separate it from the name. For example, for a server listening on port 1401, enter <servername or IP>,1401. As an alternative, you can enter the ADO connection string for your database here.

Database name (optional): The database that you want to use. To connect to the default database, don't specify a database name here.

User name: If you selected SQL Login, enter the name of a user with access to a database on the server.

Save Password: Press Enter to select Yes and save the password. Select No to be prompted for the password each time the connection profile is used.

Profile Name (optional): Type a name for the connection profile, such as localhost profile.
After you enter all values and select Enter, Visual Studio Code creates the
connection profile and connects to the SQL Server.
Tip
If the connection fails, try to diagnose the problem from the error message in
the Output panel in Visual Studio Code. To open the Output panel, select
View > Output. Also review the connection troubleshooting
recommendations.
As an alternative to the previous steps, you can also create and edit connection profiles
in the User Settings file (settings.json). To open the settings file, select File > Preferences
> Settings. For more information, see Manage connection profiles.
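As a sketch, a connection profile entry in settings.json has roughly this shape (the values here are placeholders; check the extension's documentation for the full schema):

```json
{
  "mssql.connections": [
    {
      "server": "localhost",
      "database": "TutorialDB",
      "authenticationType": "SqlLogin",
      "user": "sa",
      "savePassword": true,
      "profileName": "localhost profile"
    }
  ]
}
```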
For users connecting to Azure SQL Database, no changes to existing, saved connections
are needed; Azure SQL Database supports encrypted connections and is configured with
trusted certificates.
For users connecting to on-premises SQL Server, or SQL Server in a Virtual Machine, if
Encrypt is set to True, ensure that you have a certificate from a trusted certificate
authority (e.g. not a self-signed certificate). Alternatively, you may choose to connect
without encryption (Encrypt set to False), or to trust the server certificate (Encrypt set to
True and Trust server certificate set to True).
Create a database
1. In the new SQL file that you started earlier, type sql to display a list of editable
code snippets.
2. Select sqlCreateDatabase.
SQL
USE master
GO
IF NOT EXISTS (
   SELECT name
   FROM sys.databases
   WHERE name = N'TutorialDB'
)
CREATE DATABASE [TutorialDB]
GO
4. Press Ctrl+Shift+E to execute the Transact-SQL commands. View the results in the
query window.
Tip
You can customize the shortcut keys for the mssql commands. See Customize
shortcuts.
Create a table
1. Delete the contents of the code editor window.
3. Type sql to display the mssql commands, or type sqluse, and then select the MS
SQL: Use Database command.
5. In the code editor, type sql to display the snippets, select sqlCreateTable, and then
press Enter.
7. Press Tab to get to the next field, and then type dbo for the schema name.
SQL

CREATE TABLE dbo.Employees
(
   EmployeesId INT NOT NULL PRIMARY KEY,
   Name [NVARCHAR](50) NOT NULL,
   Location [NVARCHAR](50) NOT NULL
)
GO

SQL

-- Insert rows into table 'Employees'
INSERT INTO dbo.Employees
   ([EmployeesId],[Name],[Location])
VALUES
   ( 1, N'Jared', N'Australia'),
   ( 2, N'Nikita', N'India'),
   ( 3, N'Tom', N'Germany')
GO
-- Query the total count of employees
SELECT COUNT(*) as EmployeeCount FROM dbo.Employees;
-- Query all employee information
SELECT e.EmployeesId, e.Name, e.Location
FROM dbo.Employees as e
GO
While you type, T-SQL IntelliSense helps you to complete the statements:
Tip
The mssql extension also has commands to help create INSERT and SELECT
statements. These were not used in the previous example.
2. Press Ctrl+Shift+E to execute the commands. The two result sets display in the
Results window.
2. Select the Results and Messages panel headers to collapse and expand the panels.
Tip
You can customize the default behavior of the mssql extension. See
Customize extension options.
3. Select the maximize grid icon on the second result grid to zoom in to those results.
Note
The maximize icon displays when your T-SQL script produces two or more
result grids.
6. Open the grid context menu again and select Save as JSON to save the result to a
.json file.
8. Verify that the JSON file saves and opens in Visual Studio Code.
If you need to save and run SQL scripts later, for administration or a larger development
project, save the scripts with a .sql extension.
Next steps
If you're new to T-SQL, see Tutorial: Write Transact-SQL statements and the
Transact-SQL Reference (Database Engine).
Develop for SQL databases in Visual Studio Code with the SQL Database Projects
extension
For more information on using or contributing to the mssql extension, see the
mssql extension project wiki.
For more information on using Visual Studio Code, see the Visual Studio Code
documentation.
Always Encrypted with secure enclaves
documentation
Find documentation about Always Encrypted with secure enclaves
Overview
OVERVIEW
CONCEPT
Enable Always Encrypted with secure enclaves for your Azure SQL Database
CONCEPT
TUTORIAL
Develop a .NET Framework application using Always Encrypted with secure enclaves
Manage keys
HOW-TO GUIDE
Configure columns
HOW-TO GUIDE
Configure column encryption in-place using Always Encrypted with secure enclaves
Configure column encryption in-place with the Always Encrypted wizard in SSMS
Enable Always Encrypted with secure enclaves for existing encrypted columns
Videos
VIDEO
Inside Azure Datacenter Architecture with Mark Russinovich
Query columns
HOW-TO GUIDE
Create and use indexes on columns using Always Encrypted with secure enclaves
Develop applications
HOW-TO GUIDE
Applies to:
SQL Server 2019 (15.x) and later - Windows only
Azure SQL Database
Always Encrypted with secure enclaves extends the existing Always Encrypted feature to
enable richer functionality on sensitive data while keeping the data confidential. This
article lists common tasks for configuring and using the feature.
For tutorials that show you how to quickly get started with Always Encrypted with secure
enclaves, see:
The process for setting up your environment depends on whether you're using SQL
Server 2019 (15.x) or Azure SQL Database.
Plan for Always Encrypted with secure enclaves in SQL Server without attestation
Configure the secure enclave in SQL Server
Important
VBS enclaves in Azure SQL Database (in preview) currently do not support
attestation. Configuring Azure Attestation only applies to Intel SGX enclaves.
See also
Getting started using Always Encrypted with secure enclaves
Ledger overview
Article • 05/23/2023
Applies to:
SQL Server 2022 (16.x)
Azure SQL Database
Azure SQL Managed Instance
Establishing trust around the integrity of data stored in database systems has been a
longstanding problem for all organizations that manage financial, medical, or other
sensitive data. The ledger feature provides tamper-evidence capabilities in your
database. You can cryptographically attest to other parties, such as auditors or other
business parties, that your data hasn't been tampered with.
Ledger helps protect data from any attacker or high-privileged user, including database
administrators (DBAs), system administrators, and cloud administrators. As with a
traditional ledger, the feature preserves historical data. If a row is updated in the
database, its previous value is maintained and protected in a history table. Ledger
provides a chronicle of all changes made to the database over time.
Ledger and the historical data are managed transparently, offering protection without
any application changes. The feature maintains historical data in a relational form to
support SQL queries for auditing, forensics, and other purposes. It provides guarantees
of cryptographic data integrity while maintaining the power, flexibility, and performance
of the SQL database.
Use cases for ledger
Let's go over some advantages of using ledger.
Streamlining audits
Any production system's value is based on the ability to trust the data that the system is
consuming and producing. If a malicious user has tampered with the data in your
database, that can have disastrous results in the business processes relying on that data.
Maintaining trust in your data requires a combination of enabling the proper security
controls to reduce potential attacks, backup and restore practices, and thorough
disaster recovery procedures. Audits by external parties ensure that these practices are
put in place.
Audit processes are highly time-intensive activities. Auditing requires on-site inspection
of implemented practices such as reviewing audit logs, inspecting authentication, and
inspecting access controls. Although these manual processes can expose potential gaps
in security, they can't provide attestable proof that the data hasn't been maliciously
altered.
Ledger provides the cryptographic proof of data integrity to auditors. This proof can
help streamline the auditing process. It also provides nonrepudiation regarding the
integrity of the system's data.
Blockchain is a great solution for multiple-party networks where trust is low between
parties that participate on the network. Many of these networks are fundamentally
centralized solutions where trust is important, but a fully decentralized infrastructure is a
heavyweight solution.
Ledger provides a solution for these networks. Participants can verify the integrity of the
centrally housed data, without the complexity and performance implications that
network consensus introduces in a blockchain network.
Customer success
Learn how Lenovo is reinforcing customer trust using ledger in Azure SQL
Database by watching this video.
RTGS.global is using ledger in Azure SQL Database to establish trust with banks
around the world.
Qode Health Solutions secures COVID-19 vaccination records with the ledger
feature in Azure SQL Database.
How it works
Every row modified by a transaction in a ledger table is cryptographically hashed
with SHA-256, using a Merkle tree data structure that produces a root hash
representing all rows in the transaction. The transactions that the database
processes are then also hashed together through a Merkle tree data structure,
producing a root hash that forms a block. The block is then hashed by combining
its root hash with the root hash of the previous block as input to the hash
function. That hashing forms a blockchain.
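The hashing scheme described above can be sketched in a short Python example. This is illustrative only: SQL Server's actual row serialization format and tree layout differ, and the helper names are invented for the sketch.

```python
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    """Reduce a list of leaf values to a single Merkle root hash."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # odd count: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]


def new_block(transactions: list[list[bytes]], previous_block_hash: bytes) -> bytes:
    """Roll each transaction's rows up to a root, combine the transaction roots
    into a block root, then chain the block to its predecessor's hash."""
    transaction_roots = [merkle_root(rows) for rows in transactions]
    block_root = merkle_root(transaction_roots)
    return sha256(block_root + previous_block_hash)


# Chain two blocks: altering any row changes every subsequent block hash.
genesis = b"\x00" * 32
block1 = new_block([[b"row 1", b"row 2"]], genesis)
block2 = new_block([[b"row 3"]], block1)
```

Because each block hash takes the previous block hash as input, tampering with any earlier row invalidates every later block, which is what makes the chain tamper-evident.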
The root hashes in the database ledger, also called Database digests, contain the
cryptographically hashed transactions and represent the state of the database. They can
be periodically generated and stored outside the database in tamper-proof storage,
such as Azure Blob Storage configured with immutability policies, Azure Confidential
Ledger, or on-premises Write Once Read Many (WORM) storage devices. Database
digests are later used to verify the integrity of the database by comparing the
value of the hash in the digest against the calculated hashes in the database.
Ledger offers two types of ledger tables:

- Updatable ledger tables, which allow you to update and delete rows in your tables.
- Append-only ledger tables, which only allow insertions to your tables.
Both updatable ledger tables and append-only ledger tables provide tamper-evidence
and digital forensics capabilities.
Updatable ledger tables track the history of changes to any rows in your database when
transactions that perform updates or deletions occur. An updatable ledger table is a
system-versioned table that contains a reference to another table with a mirrored
schema.
The other table is called the history table. The system uses this table to automatically
store the previous version of the row each time a row in the ledger table is updated or
deleted. The history table is automatically created when you create an updatable ledger
table.
The values in the updatable ledger table and its corresponding history table provide a
chronicle of the values of your database over time. A system-generated ledger view
joins the updatable ledger table and the history table so that you can easily query this
chronicle of your database.
For more information on updatable ledger tables, see Create and use updatable ledger
tables.
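The pairing of an updatable ledger table with its history table can be illustrated with a small Python sketch. The class and method names are invented for illustration; the real tables are system-versioned SQL tables maintained by the database engine.

```python
class UpdatableLedgerTable:
    """Keeps current rows plus every superseded row version in a history list."""

    def __init__(self):
        self.rows = {}       # current state, keyed by primary key
        self.history = []    # previous row versions, never modified in place

    def upsert(self, key, value):
        if key in self.rows:
            # Preserve the old version, mimicking the history table.
            self.history.append((key, self.rows[key]))
        self.rows[key] = value

    def delete(self, key):
        self.history.append((key, self.rows.pop(key)))

    def chronicle(self, key):
        """Ledger-view-like chronicle: every version of a row over time."""
        past = [v for k, v in self.history if k == key]
        return past + ([self.rows[key]] if key in self.rows else [])


t = UpdatableLedgerTable()
t.upsert(1, "Jared, Australia")
t.upsert(1, "Jared, Germany")   # old version moves to the history list
```

Calling `t.chronicle(1)` after these updates returns both versions in order, which mirrors how the system-generated ledger view joins the ledger table with its history table.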
Because only insertions are allowed into the system, append-only ledger tables don't
have a corresponding history table because there's no history to capture. As with
updatable ledger tables, a ledger view provides insights into the transaction that
inserted rows into the append-only table, and the user that performed the insertion.
For more information on append-only ledger tables, see Create and use append-only
ledger tables.
Ledger database
Ledger databases provide an easy solution for applications that require the integrity of
all data to be protected for the entire lifetime of the database. A ledger database can
only contain ledger tables. Creating regular tables (tables that are not ledger
tables) is not supported. Each table is, by default, created as an updatable
ledger table with default settings, which makes creating such tables easy. You
configure a database as a
ledger database at creation. Once created, a ledger database cannot be converted to a
regular database. For more information, see Configure a ledger database.
Database digests
The hash of the latest block in the database ledger is called the database digest. It
represents the state of all ledger tables in the database at the time that the block was
generated.
When a block is formed, its associated database digest is published and stored outside
the database in tamper-proof storage. Because database digests represent the state of
the database at the time that they were generated, protecting the digests from
tampering is paramount. An attacker who has access to modify the digests would be
able to conceal evidence of data tampering.
Ledger provides the ability to automatically generate and store the database digests in
immutable storage or Azure Confidential Ledger, to prevent tampering. Alternatively,
users can manually generate database digests and store them in the location of their
choice. Database digests are used for later verifying that the data stored in ledger tables
hasn't been tampered with.
Ledger verification
The ledger feature doesn't allow modifying the content of ledger system views, append-
only tables and history tables. However, an attacker or system administrator who has
control of the machine can bypass all system checks and directly tamper with the data.
For example, an attacker or system administrator can edit the database files in storage.
Ledger can't prevent such attacks but guarantees that any tampering will be detected
when the ledger data is verified.
The ledger verification process takes as input one or more previously generated
database digests and recomputes the hashes stored in the database ledger based on
the current state of the ledger tables. If the computed hashes don't match the input
digests, the verification fails, indicating that the data has been tampered with. Ledger
then reports all inconsistencies that it has detected.
Next steps
What is the database ledger
Create and use append-only ledger tables
Create and use updatable ledger tables
Enable automatic digest storage
Configure a ledger database
Verify a ledger table to detect tampering
See also
Bringing the power of blockchain to Azure SQL Database and SQL Server with
ledger | Data Exposed
What is the database ledger?
Article • 05/23/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
The database ledger is part of the ledger feature. The database ledger incrementally
captures the state of a database as the database evolves over time, while updates occur
on ledger tables. It logically uses a blockchain and Merkle tree data structures.
Any operations that update a ledger table need to perform additional tasks to
maintain the historical data and compute the digests captured in the database ledger.
Specifically, for every row updated, the earlier row version must be preserved and the
hash of the new row version must be computed and captured in the database ledger.
Ledger achieves that by extending the Data Manipulation Language (DML) query plans
of all insert, update and delete operations targeting ledger tables. The transaction ID
and newly generated sequence number are set for the new version of the row. Then, the
query plan operator executes a special expression that serializes the row content and
computes its hash, appending it to a Merkle Tree that is stored at the transaction level
and contains the hashes of all row versions updated by this transaction for this ledger
table. The root of the tree represents all the updates and deletes performed by this
transaction in this ledger table. If the transaction updates multiple tables, a separate
Merkle Tree is maintained for each table. The figure below shows an example of a
Merkle Tree storing the updated row versions of a ledger table and the format used to
serialize the rows. Other than the serialized value of each column, we include metadata
regarding the number of columns in the row, the ordinal of individual columns, the data
types, lengths and other information that affects how the values are interpreted.
To capture the state of the database, the database ledger stores an entry for every
transaction. It captures metadata about the transaction, such as its commit timestamp
and the identity of the user who executed it. It also captures the Merkle tree root of the
rows updated in each ledger table (see above). These entries are then appended to a
tamper-evident data structure to allow verification of integrity in the future. A block is
closed when a database digest is generated, either automatically on a predefined
interval or manually by the user.
When a block is closed, new transactions will be inserted in a new block. The block
generation process then:
1. Retrieves all transactions that belong to the closed block from both the in-memory
queue and the sys.database_ledger_transactions system catalog view.
2. Computes the Merkle tree root over these transactions and the hash of the
previous block.
3. Persists the closed block in the sys.database_ledger_blocks system catalog view.
Because this is a regular table update, the system automatically guarantees its durability.
To maintain the single chain of blocks, this operation is single-threaded. But it's also
efficient, because it only computes the hashes over the transaction information and
happens asynchronously. It doesn't affect the transaction performance.
For more information on how ledger provides data integrity, see the articles, Digest
management and Database verification.
For example, consider a ledger table where four transactions made up one block in the
blockchain of the database ledger.
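The transactions and blocks captured by the database ledger can be inspected through the system catalog views named above; a minimal sketch:

```sql
-- Transactions captured in the database ledger, including commit
-- metadata and the identity of the user who executed each one.
SELECT * FROM sys.database_ledger_transactions;

-- Closed blocks, each storing the Merkle tree root over its
-- transactions and the hash of the previous block.
SELECT * FROM sys.database_ledger_blocks;
```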
Permissions
Viewing the database ledger requires the VIEW LEDGER CONTENT permission. For details
on permissions related to ledger tables, see Permissions.
See also
Ledger overview
Data Manipulation Language (DML)
Ledger views
Updatable ledger tables
Article • 05/23/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
Updatable ledger tables are system-versioned tables on which users can perform
updates and deletes while also providing tamper-evidence capabilities. When updates
or deletes occur, all earlier versions of a row are preserved in a secondary table, known
as the history table. The history table mirrors the schema of the updatable ledger table.
When a row is updated, the latest version of the row remains in the ledger table, while
its earlier version is inserted into the history table by the system, transparently to the
application.
Both updatable ledger tables and temporal tables are system-versioned tables, for
which the database engine captures historical row versions in secondary history tables.
Either technology provides unique benefits. Updatable ledger tables make both the
current and historical data tamper evident. Temporal tables support querying the data
stored at any point in time instead of only the data that's correct at the current moment
in time. You can use both technologies together by creating tables that are both
updatable ledger tables and temporal tables.
You can create an updatable ledger table by specifying the LEDGER = ON argument in
your CREATE TABLE (Transact-SQL) statement.
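As a minimal sketch (schema, table, and column names are illustrative), an updatable ledger table combines LEDGER = ON with system versioning:

```sql
CREATE SCHEMA [Account];
GO

-- Updatable ledger table; earlier row versions are preserved in the
-- named history table, transparently to the application.
CREATE TABLE [Account].[Balance]
(
    [CustomerID] INT NOT NULL PRIMARY KEY CLUSTERED,
    [LastName] VARCHAR(50) NOT NULL,
    [Balance] DECIMAL(10, 2) NOT NULL
)
WITH (
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = [Account].[BalanceHistory]),
    LEDGER = ON
);
```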
Tip
For information on options available when you specify the LEDGER argument in your T-
SQL statement, see CREATE TABLE (Transact-SQL).
Important
After a ledger table is created, it can't be reverted to a table that isn't a ledger
table. As a result, an attacker can't temporarily remove ledger capabilities on a
ledger table, make changes, and then reenable ledger functionality.
Updatable ledger table schema
An updatable ledger table needs to have the following GENERATED ALWAYS columns
that contain metadata noting which transactions made changes to the table and the
order of operations by which rows were updated by the transaction. This data is useful
for forensics purposes in understanding how data was inserted over time.
If you don't specify the required GENERATED ALWAYS columns of the ledger table and
ledger history table in the CREATE TABLE (Transact-SQL) statement, the system
automatically adds the columns and uses the following default names. For more
information, see examples in Creating an updatable ledger table.
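If you prefer explicit names over the system-generated defaults, the required GENERATED ALWAYS columns can be declared directly in the CREATE TABLE statement; a sketch with illustrative table and column names:

```sql
CREATE TABLE [dbo].[Employees]
(
    [EmployeeID] INT NOT NULL PRIMARY KEY,
    [Salary] DECIMAL(10, 2) NOT NULL,
    -- ID of the transaction that created / deleted this row version.
    [ledger_start_transaction_id] BIGINT GENERATED ALWAYS AS transaction_id START HIDDEN NOT NULL,
    [ledger_end_transaction_id] BIGINT GENERATED ALWAYS AS transaction_id END HIDDEN NULL,
    -- Order of the operations within the transaction.
    [ledger_start_sequence_number] BIGINT GENERATED ALWAYS AS sequence_number START HIDDEN NOT NULL,
    [ledger_end_sequence_number] BIGINT GENERATED ALWAYS AS sequence_number END HIDDEN NULL
)
WITH (
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[EmployeesHistory]),
    LEDGER = ON
);
```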
History table
The history table is automatically created when an updatable ledger table is created. The
history table captures the historical values of rows changed because of updates and
deletes in the updatable ledger table. The schema of the history table mirrors that of the
updatable ledger table it's associated with.
When you create an updatable ledger table, you can either specify the name of the
schema to contain your history table and the name of the history table or you have the
system generate the name of the history table and add it to the same schema as the
ledger table. History tables with system-generated names are called anonymous history
tables. The naming convention for an anonymous history table is
<schema>.<updatableledgertablename>.MSSQL_LedgerHistoryFor_<GUID>.
Ledger view
For every updatable ledger table, the system automatically generates a view, called the
ledger view. The ledger view is a join of the updatable ledger table and its associated
history table. The ledger view reports all row modifications that have occurred on the
updatable ledger table by joining the historical data in the history table. This view
enables users, their partners, or auditors to analyze all historical operations and detect
potential tampering. Each row operation is accompanied by the ID of the acting
transaction, along with whether the operation was a DELETE or an INSERT . Users can
retrieve more information about the time the transaction was executed and the identity
of the user who executed it and correlate it to other operations performed by this
transaction.
For example, if you want to track transaction history for a banking scenario, the ledger
view provides a chronicle of transactions over time. By using the ledger view, you don't
have to independently view the updatable ledger table and history tables or construct
your own view to do so.
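A sketch of querying a ledger view; the table and view names ([Account].[Balance] and [Account].[Balance_Ledger]) are illustrative, based on the default <table>_Ledger naming:

```sql
-- Chronicle of all row operations on an updatable ledger table,
-- joined from the current table and its history table.
SELECT [CustomerID],
       [Balance],
       [ledger_operation_type_desc],  -- INSERT or DELETE
       [ledger_transaction_id],
       [ledger_sequence_number]
FROM [Account].[Balance_Ledger]
ORDER BY [ledger_transaction_id], [ledger_sequence_number];
```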
For an example of using the ledger view, see Create and use updatable ledger tables.
The ledger view's schema mirrors the columns defined in the updatable ledger and
history table, but the GENERATED ALWAYS columns are different than those of the
updatable ledger and history tables.
Note
The ledger view column names can be customized when you create the table by
using the <ledger_view_option> parameter with the CREATE TABLE (Transact-SQL)
statement. For more information, see ledger view options and the corresponding
examples in CREATE TABLE (Transact-SQL).
Next steps
Create and use updatable ledger tables
Create and use append-only ledger tables
How to migrate data from regular tables to ledger tables
Append-only ledger tables
Article • 02/28/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
Append-only ledger tables allow only INSERT operations on your tables, which ensures
that privileged users such as database administrators can't alter data through traditional
Data Manipulation Language operations. Append-only ledger tables are ideal for
systems that don't update or delete records, such as security information and event
management (SIEM) systems, or blockchain systems where data needs to be replicated from the
blockchain to a database. Because there are no UPDATE or DELETE operations on an
append-only table, there's no need for a corresponding history table as there is with
updatable ledger tables.
You can create an append-only ledger table by specifying the LEDGER = ON argument in
your CREATE TABLE (Transact-SQL) statement and specifying the APPEND_ONLY = ON
option.
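A minimal sketch (table and column names are illustrative):

```sql
-- Append-only ledger table: INSERT is the only DML operation
-- allowed, so no history table is created.
CREATE TABLE [dbo].[KeyCardEvents]
(
    [EmployeeID] INT NOT NULL,
    [AccessOperationDescription] NVARCHAR(1024) NOT NULL,
    [Timestamp] DATETIME2 NOT NULL
)
WITH (
    LEDGER = ON (APPEND_ONLY = ON)
);
```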
Important
After a table is created as a ledger table, it can't be reverted to a table that doesn't
have ledger functionality. As a result, an attacker can't temporarily remove ledger
capabilities, make changes to the table, and then reenable ledger functionality.
If you don't specify the definitions of the GENERATED ALWAYS columns in the CREATE
TABLE statement, the system automatically adds them by using the following default
names.
Ledger view
For every append-only ledger table, the system automatically generates a view, called
the ledger view. The ledger view reports all row inserts that have occurred on the table.
The ledger view is primarily helpful for updatable ledger tables, rather than append-only
ledger tables, because append-only ledger tables don't have any UPDATE or DELETE
capabilities. The ledger view for append-only ledger tables is available for consistency
between both updatable and append-only ledger tables.
Ledger view schema
Note
The ledger view column names can be customized when you create the table by
using the <ledger_view_option> parameter with the CREATE TABLE (Transact-SQL)
statement. For more information, see ledger view options and the corresponding
examples in CREATE TABLE (Transact-SQL).
Next steps
Create and use append-only ledger tables
Create and use updatable ledger tables
How to migrate data from regular tables to ledger tables
Digest management
Article • 05/23/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
Database digests
The hash of the latest block in the database ledger is called the database digest. It
represents the state of all ledger tables in the database at the time when the block was
generated. Generating a database digest is efficient, because it involves computing only
the hashes of the blocks that were recently appended.
Database digests are generated in the form of a JSON document that contains the hash
of the latest block, together with metadata for the block ID. The metadata includes the
time that the digest was generated and the commit time stamp of the last transaction in
this block.
The verification process and the integrity of the database depend on the integrity of the
input digests. For this purpose, database digests that are extracted from the database
need to be stored in trusted storage that the high-privileged users or attackers of the
database can't tamper with.
Note
Automatic generation and storage of database digests in SQL Server only supports
Azure Storage accounts.
Ledger integrates with the immutable storage feature of Azure Blob Storage and Azure
Confidential Ledger. This integration provides secure storage services in Azure to help
protect the database digests from potential tampering, and it offers a simple and
cost-effective way for users to automate digest management without having to worry
about availability and geographic replication. Azure Confidential Ledger offers a
stronger integrity guarantee for customers who might be concerned about privileged
administrators' access to the digests. This table compares the immutable storage
feature of Azure Blob Storage with Azure Confidential Ledger.
You can configure automatic generation and storage of database digests through the
Azure portal, PowerShell, or the Azure CLI. For more information, see Enable automatic
digest storage. When you configure automatic generation and storage, database digests
are generated on a predefined interval of 30 seconds and uploaded to the selected
storage service. If no transactions occur on the system in the 30-second interval, a
database digest won't be generated and uploaded. This mechanism ensures that
database digests are generated only when data has been updated in your database.
When the endpoint is Azure Blob Storage, the logical server for Azure SQL Database or
Azure SQL Managed Instance creates a new container named sqldbledgerdigests and
uses a naming pattern like ServerName/DatabaseName/CreationTime. The creation time
is needed because a database with the same name can be dropped and re-created or
restored, allowing for different incarnations of the database under the same name.
For more information, see Digest Management Considerations.
Note
For SQL Server, the container needs to be created manually by the user.
If you use an Azure Storage account for the storage of the database digests, configure
an immutability policy on your container after provisioning to ensure that database
digests are protected from tampering. Make sure the immutability policy allows
protected append writes to append blobs and that the policy is locked.
If you use SQL Server, you have to create a shared access signature (SAS) on the digest
container to allow SQL Server to connect and authenticate against the Azure Storage
account.
The following example assumes that an Azure Storage container, a policy, and a SAS key
have been created. This is needed by SQL Server to access the digest files in the
container.
In the following code snippet, replace <your SAS key> with the SAS key. The SAS key
looks like 'sr=c&si=<MYPOLICYNAME>&sig=<THESHAREDACCESSSIGNATURE>' .
SQL
CREATE CREDENTIAL
[https://ledgerstorage.blob.core.windows.net/sqldbledgerdigests]
WITH IDENTITY='SHARED ACCESS SIGNATURE',
SECRET = '<your SAS key>'
If you use Azure SQL Database or Azure SQL Managed Instance, make sure that your
logical server or managed instance (System Identity) has sufficient permissions to write
digests by adding it to the Contributor role. To do this, follow the steps for Azure
Confidential Ledger user management.
To generate a database digest manually, run the sp_generate_database_ledger_digest
stored procedure:
SQL
EXECUTE sp_generate_database_ledger_digest;
The returned result set is a single row of data. It should be saved to the trusted storage
location as a JSON document as follows:
JSON
{
"database_name": "ledgerdb",
"block_id": 0,
"hash":
"0xDC160697D823C51377F97020796486A59047EBDBF77C3E8F94EEE0FFF7B38A6A",
"last_transaction_commit_time": "2020-11-12T18:01:56.6200000",
"digest_time": "2020-11-12T18:39:27.7385724"
}
Permissions
Generating database digests requires the GENERATE LEDGER DIGEST permission. For
details on permissions related to ledger tables, see Permissions.
Database restore
Restoring the database back to an earlier point in time, also known as Point in Time
Restore, is an operation frequently used when a mistake occurs and users need to
quickly revert the state of the database back to an earlier point in time. When the
generated digests are uploaded to Azure Storage or Azure Confidential Ledger, the
create time of the database that these digests map to is also captured. Every time the
database is restored, it's tagged with a new create time, and this technique allows the
digests to be stored across different "incarnations" of the database. For SQL Server, the create time is the
current UTC time when the digest upload is enabled for the first time. Ledger preserves
the information regarding when a restore operation occurred, allowing the verification
process to use all the relevant digests across the various incarnations of the database.
Additionally, users can inspect all digests for different create times to identify when the
database was restored and how far back it was restored to. Since this data is written in
immutable storage, this information will be protected as well.
Note
Ledger in Azure SQL Managed Instance is currently in public preview. If you
perform a native restore of a database backup, you need to change the digest path
manually using the Azure Portal, PowerShell or the Azure CLI.
If the failover group is deleted, or if you drop the link, both databases behave as primary
databases. At that point, the digest path of the previous secondary database changes,
and a folder named RemovedSecondaryReplica is added to the path.
When your database is part of an Always On availability group in SQL Server, the same
principle as active geo-replication is used. The upload of the digests is only done if all
transactions have been replicated to the secondary replicas.
Note
Ledger in Azure SQL Managed Instance is currently in public preview. The Managed
Instance link feature is not supported.
Next steps
Ledger overview
Enable automatic digest storage
sys.sp_generate_database_ledger_digest
Database verification
Article • 05/24/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
Ledger provides a form of data integrity called forward integrity, which provides
evidence of data tampering on data in your ledger tables. The database verification
process takes as input one or more previously generated database digests. It then
recomputes the hashes stored in the database ledger based on the current state of the
ledger tables. If the computed hashes don't match the input digests, the verification
fails. The failure indicates that the data has been tampered with. The verification process
reports all inconsistencies that it detects.
Because the ledger verification recomputes all of the hashes for transactions in the
database, it can be a resource-intensive process for databases with large amounts of
data. To reduce the cost of verification, the feature exposes options to verify individual
ledger tables or only a subset of the ledger tables.
Note
When you use automatic digest storage, you can change storage locations throughout
the lifecycle of the ledger tables. For example, if you start by using Azure immutable
storage to store your digest files, but later you want to use Azure Confidential Ledger
instead, you can do so. This change in location is stored in
sys.database_ledger_digest_locations.
When you run ledger verification, inspect the location of digest_locations to ensure
digests used in verification are retrieved from the locations you expect. You want to
make sure that a privileged user hasn't changed locations of the digest storage to an
unprotected storage location, such as Azure Storage, without a configured and locked
immutability policy.
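The recorded locations can be inspected with a simple query:

```sql
-- Each row records a digest storage location used during the
-- lifecycle of the database; confirm these are the locations
-- you expect before trusting a verification result.
SELECT * FROM sys.database_ledger_digest_locations;
```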
To simplify running verification when you use multiple digest storage locations, the
following script will fetch the locations of the digests and execute verification by using
those locations.
SQL
DECLARE @digest_locations NVARCHAR(MAX) =
    (SELECT * FROM sys.database_ledger_digest_locations FOR JSON AUTO, INCLUDE_NULL_VALUES);

BEGIN TRY
    EXEC sys.sp_verify_database_ledger_from_digest_storage @digest_locations;
    SELECT 'Ledger verification succeeded.' AS Result;
END TRY
BEGIN CATCH
    THROW;
END CATCH
If you store digests manually, pass the digest JSON documents to
sp_verify_database_ledger as an array:
SQL
EXECUTE sp_verify_database_ledger N'
[
    {
        "database_name": "ledgerdb",
        "block_id": 0,
        "hash": "0xDC160697D823C51377F97020796486A59047EBDBF77C3E8F94EEE0FFF7B38A6A",
        "last_transaction_commit_time": "2020-11-12T18:01:56.6200000",
        "digest_time": "2020-11-12T18:39:27.7385724"
    },
    {
        "database_name": "ledgerdb",
        "block_id": 1,
        "hash": "0xE5BE97FDFFA4A16ADF7301C8B2BEBC4BAE5895CD76785D699B815ED2653D9EF8",
        "last_transaction_commit_time": "2020-11-12T18:39:35.6633333",
        "digest_time": "2020-11-12T18:43:30.4701575"
    }
]';
Recommendation
Ideally, you want to minimize or even eliminate the gap between the time an attack
occurred and the time it was detected. Microsoft recommends scheduling ledger
verification regularly, to avoid having to restore the database from days or months ago
after tampering is detected. The verification interval should be decided by the
customer, but be aware that ledger verification can be resource-intensive. We
recommend running it during a maintenance window or off-peak hours.
Scheduling database verification in Azure SQL Database can be done with Elastic Jobs or
Azure Automation. For scheduling the database verification in Azure SQL Managed
Instance and SQL Server, you can use SQL Server Agent.
Permissions
Database verification requires the VIEW LEDGER CONTENT permission. For details on
permissions related to ledger tables, see Permissions.
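As a sketch (the principal name auditor is illustrative), the permission is granted with standard T-SQL:

```sql
-- Allow a database user to query ledger views and run verification.
GRANT VIEW LEDGER CONTENT TO [auditor];
```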
Next steps
Ledger overview
Verify a ledger table to detect tampering
sys.database_ledger_digest_locations
sp_verify_database_ledger_from_digest_storage
sp_verify_database_ledger
Monitor digest uploads
Article • 05/23/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
You can monitor failed and successful ledger digest uploads in the Azure portal in the
Metrics view of your Azure SQL Database.
Next steps
Ledger overview
Enable automatic digest storage
Recover ledger database after
tampering
Article • 05/24/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
Tampering categories
Depending on the type of tampering, there are cases where you can repair the ledger
without losing data. You should consider two categories of tampering events: tampering
that affects only the data in ledger tables, and tampering that also affects the database
ledger itself.
In either category, the tampering didn't affect any transactions that occurred after the
tampering event, so the new transaction execution and generated results are correct.
Based on that, you should ideally bring the ledger to a consistent state without affecting
these transactions.
If the attacker didn't tamper with the database level ledger, this is easy to detect and
repair. The database ledger is in a consistent state with all database digests generated,
and any new transactions appended to it have been hashed using the valid hashes of
earlier transactions. Based on that, any database digests that were generated, even for
transactions after the tampering occurred, are still valid. You can attempt to retrieve the
correct table ledger payload for the tampered transactions from backups that can still
be validated to be secure (using the ledger validation on them) and repair the
operational database by overwriting the tampered data in the table ledger. This will
create a new transaction recording the repairing transactions.
If the attacker tampered with the database ledger, recomputing the hashes of blocks to
make it internally consistent (until verified against external database digests), then new
transactions and database digests will be generated over invalid hashes. This leads to a
fork in the ledger, since the new database digests generated map to an invalid state and
even if you repair the ledger by using earlier backups, all these database digests are
permanently invalid. Additionally, since the database ledger is broken, you can't trust
the details about transactions that occurred after tampering until you verify them. Based
on that, the tampering can potentially be reverted only by restoring the database from a
backup whose integrity can still be verified against trusted digests.
Ledger considerations and limitations
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
There are some considerations and limitations to be aware of when working with ledger
tables due to the nature of system-versioning and immutable data.
Note
A ledger database, a database with the ledger property set to on, can't be
converted to a regular database, with the ledger property set to off.
Automatic generation and storage of database digests is currently available in
Azure SQL Database, but not supported on SQL Server.
Automated digest management with ledger tables by using Azure Storage
immutable blobs doesn't offer the ability for users to use locally redundant storage
(LRS) accounts.
When a ledger database is created, all new tables created by default (without
specifying the APPEND_ONLY = ON clause) in the database will be updatable ledger
tables. To create append-only ledger tables, use the APPEND_ONLY = ON clause in the
CREATE TABLE (Transact-SQL) statements.
A transaction can update up to 200 ledger tables.
If the name of a history table is specified during history table creation, you must
specify the schema and table name and also the name of the ledger view.
By default, the history table is PAGE compressed.
If the current table is partitioned, the history table is created on the default file
group because partitioning configuration isn't replicated automatically from the
current table to the history table.
Temporal and history tables can't be a FILETABLE and can contain columns of any
supported datatype other than FILESTREAM. FILETABLE and FILESTREAM allow data
manipulation outside of SQL Server, and thus system versioning can't be
guaranteed.
A node or edge table can't be created as or altered to a temporal table. Graph isn't
supported with ledger.
While temporal tables support blob data types, such as (n)varchar(max) ,
varbinary(max) , (n)text , and image , they'll incur significant storage costs and
have performance implications due to their size. As such, when designing your
system, care should be taken when using these data types.
The history table must be created in the same database as the current table.
Temporal querying over Linked Server isn't supported.
The history table can't have constraints (Primary Key, Foreign Key, table, or column
constraints).
The online option (WITH (ONLINE = ON)) has no effect on ALTER TABLE ALTER COLUMN
for a system-versioned temporal table. ALTER COLUMN isn't performed as online
regardless of the value specified for the ONLINE option.
INSERT and UPDATE statements can't reference the GENERATED ALWAYS columns.
Dropped history tables for updatable ledger tables are renamed using the
following format:
MSSQL_DroppedLedgerHistory_<dropped_history_table_name>_<GUID> .
Note
The name of dropped ledger tables, history tables and ledger views might be
truncated if the length of the renamed table or view exceeds 128 characters.
Altering columns
Any changes that don't impact the underlying data of a ledger table are supported
without any special handling, because they don't affect the hashes captured in the
ledger. These changes include:
Changing nullability
Collation for Unicode strings
The length of variable-length columns
However, operations that might affect the format of existing data, such as changing the
data type, aren't supported.
Next steps
Ledger overview
Updatable ledger tables
Append-only ledger tables
Database ledger
Configure and manage content
reference - Azure SQL Database
Article • 02/07/2023
In this article you can find a content reference of various guides, scripts, and
explanations that can help you to manage and configure your Azure SQL Database.
Load data
Migrate to SQL Database
Learn how to manage SQL Database after migration.
Copy a database
Import a DB from a BACPAC
Export a DB to BACPAC
Load data with BCP
Load data with ADF
Configure features
Configure Azure Active Directory (Azure AD) auth
Configure Conditional Access
Azure AD Multi-Factor Authentication
Configure backup retention for a database to keep your backups on Azure Blob
Storage.
Configure geo-replication to keep a replica of your database in another region.
Configure auto-failover group to automatically fail over a group of single or
pooled databases to a secondary server in another region in the event of a
disaster.
Configure temporal retention policy
Configure TDE with BYOK
Rotate TDE BYOK keys
Remove TDE protector
Configure In-Memory OLTP
Configure Azure Automation
Configure transactional replication to replicate your data between databases.
Configure threat detection to let Azure SQL Database identify suspicious activities
such as SQL Injection or access from suspicious locations.
Configure dynamic data masking to protect your sensitive data.
Configure security for geo-replicas.
Extended events
Extended events
Store Extended events into event file
Store Extended events into ring buffer
Data sync
SQL Data Sync
Data Sync Agent
Replicate schema changes
Monitor with OMS
Best practices for Data Sync
Troubleshoot Data Sync
Database sharding
Upgrade elastic database client library.
Create sharded app.
Query horizontally sharded data.
Run Multi-shard queries.
Move sharded data.
Configure security in database shards.
Add a shard to the current set of database shards.
Fix shard map problems.
Migrate sharded DB.
Create counters.
Use entity framework to query sharded data.
Use Dapper framework to query sharded data.
Develop applications
Connectivity
Use Spark Connector
Authenticate app
Use batching for better performance
Connectivity guidance
DNS aliases
Setup DNS alias PowerShell
Ports - ADO.NET
C and C++
Excel
Design applications
Design for disaster recovery
Design for elastic pools
Design for app upgrades
Next steps
Learn more about How-to guides for Azure SQL Managed Instance
Quickstart: Use Azure Data Studio to
connect and query Azure SQL Database
Article • 05/10/2023
In this quickstart, you'll use Azure Data Studio to connect to an Azure SQL Database
server. You'll then run Transact-SQL (T-SQL) statements to create and query the
TutorialDB database, which is used in other Azure Data Studio tutorials.
Prerequisites
To complete this quickstart, you need Azure Data Studio, and an Azure SQL Database
server.
If you don't have an Azure SQL server, complete one of the following Azure SQL
Database quickstarts. Remember the fully qualified server name and sign-in credentials
for later steps:
Create DB - Portal
Create DB - CLI
Create DB - PowerShell
1. The first time you run Azure Data Studio the Welcome page should open. If you
don't see the Welcome page, select Help > Welcome. Select New Connection to
open the Connection pane:
2. This article uses SQL sign-in, but for Azure SQL Database, Azure AD Universal MFA
authentication is also supported. Fill in the following fields using the server name,
user name, and password for your Azure SQL server:
User name: the server admin account user name (the user name from the account used
to create the server).
Password (SQL Login): the server admin account password (the password from the
account used to create the server).
Server Group: select <Default>, or set this field to a specific server group you created.
3. Select Connect.
4. If your server doesn't have a firewall rule allowing Azure Data Studio to connect,
the Create new firewall rule form opens. Complete the form to create a new
firewall rule. For details, see Firewall rules.
1. Right-click on your Azure SQL server in the SERVERS sidebar and select New
Query.
2. Paste the following snippet into the query editor:
SQL
IF NOT EXISTS (
    SELECT name
    FROM sys.databases
    WHERE name = N'TutorialDB'
)
CREATE DATABASE [TutorialDB];
GO

ALTER DATABASE [TutorialDB] SET QUERY_STORE = ON;
GO
3. From the toolbar, select Run. Notifications appear in the MESSAGES pane showing
query progress.
Create a table
The query editor is connected to the master database, but we want to create a table in
the TutorialDB database.
Replace the previous query in the query editor with this one and select Run.
SQL
-- Switch to the TutorialDB database
USE [TutorialDB];
GO

-- Create a new table called 'Customers'
CREATE TABLE dbo.Customers (
    CustomerId INT NOT NULL PRIMARY KEY,
    Name NVARCHAR(50) NOT NULL,
    Location NVARCHAR(50) NOT NULL,
    Email NVARCHAR(50) NOT NULL
);
GO

Insert rows into the table
Replace the previous query in the query editor with this one and select Run. It
inserts sample rows into the new table:
SQL
INSERT INTO dbo.Customers
    ([CustomerId],[Name],[Location],[Email])
VALUES
    (1, N'Orlando', N'Australia', N''),
    (2, N'Keith', N'India', N'keith0@adventure-works.com'),
    (3, N'Donna', N'Germany', N'donna0@adventure-works.com'),
    (4, N'Janet', N'United States', N'janet1@adventure-works.com');
GO
Query the data
Replace the previous query with this one and select Run. It returns the rows you
inserted:
SQL
-- Select rows from table 'Customers'
SELECT * FROM dbo.Customers;
Clean up resources
Later quickstart articles build upon the resources created here. If you plan to work
through these articles, be sure not to delete these resources. Otherwise, in the Azure
portal, delete the resources you no longer need. For details, see Clean up resources.
Next steps
Now that you've successfully connected to an Azure SQL database and run a query, try
the Code editor tutorial.
Use Spring Data JDBC with Azure SQL
Database
Article • 04/19/2023
This tutorial demonstrates how to store data in Azure SQL Database using Spring Data
JDBC.
In this tutorial, we include two authentication methods: Azure Active Directory (Azure
AD) authentication and SQL Database authentication. The Passwordless tab shows the
Azure AD authentication and the Password tab shows the SQL Database authentication.
SQL Database authentication uses accounts stored in SQL Database. If you choose to
use passwords as credentials for the accounts, these credentials will be stored in the
user table. Because these passwords are stored in SQL Database, you need to manage
the rotation of the passwords by yourself.
Prerequisites
An Azure subscription - create one for free.
Apache Maven.
Azure CLI.
The sqlcmd utility.
If you don't have one, create an Azure SQL Server instance named sqlservertest
and a database named demo . For instructions, see Quickstart: Create a single
database - Azure SQL Database.
If you don't have a Spring Boot application, create a Maven project with the Spring
Initializr . Be sure to select Maven Project and, under Dependencies, add the
Spring Web, Spring Data JDBC, and MS SQL Server Driver dependencies, and
then select Java version 8 or higher.
To be able to use your database, open the server's firewall to allow the local IP address
to access the database server. For more information, see Tutorial: Secure a database in
Azure SQL Database.
If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.
Passwordless (Recommended)
1. First, install the Service Connector passwordless extension for the Azure CLI:
Azure CLI
az extension add --name serviceconnector-passwordless --upgrade
2. Then, use the following command to create the Azure AD non-admin user:
Azure CLI
az connection create sql \
--resource-group <your-resource-group-name> \
--connection sql_conn \
--target-resource-group <your-resource-group-name> \
--server sqlservertest \
--database demo \
--user-account \
--query authInfo.userName \
--output tsv
The Azure AD admin you created is an SQL database admin user, so you don't need
to create a new user.
Important
To install the Spring Cloud Azure Starter module, add the following dependencies to
your pom.xml file:
XML
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-dependencies</artifactId>
<version>4.9.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
Note
XML
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-starter</artifactId>
</dependency>
Passwordless (Recommended)
properties
logging.level.org.springframework.jdbc.core=DEBUG
spring.datasource.url=jdbc:sqlserver://sqlservertest.database.windows.net:1433;databaseName=demo;authentication=DefaultAzureCredential;
spring.sql.init.mode=always
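The datasource URL above follows the SQL Server JDBC URL grammar: a host and port, followed by semicolon-separated properties. Assembling it programmatically makes the parts explicit. A small sketch (the `JdbcUrlSketch` helper and its parameter names are our own, for illustration only, not a Spring or JDBC driver API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Builds a SQL Server JDBC URL of the form
// jdbc:sqlserver://<host>:<port>;key=value;... from its parts.
public class JdbcUrlSketch {

    static String buildUrl(String host, int port, Map<String, String> properties) {
        StringBuilder url = new StringBuilder("jdbc:sqlserver://").append(host).append(':').append(port);
        for (Map.Entry<String, String> property : properties.entrySet()) {
            url.append(';').append(property.getKey()).append('=').append(property.getValue());
        }
        return url.append(';').toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("databaseName", "demo");
        props.put("authentication", "DefaultAzureCredential");
        // Reproduces the spring.datasource.url value used in this tutorial
        System.out.println(buildUrl("sqlservertest.database.windows.net", 1433, props));
    }
}
```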
Warning
2. Create a src/main/resources/schema.sql file to define the todo table schema.
Spring Boot runs this script at startup because spring.sql.init.mode is set to
always:
SQL
DROP TABLE IF EXISTS todo;
CREATE TABLE todo (id BIGINT IDENTITY PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BIT);
3. Create a new Todo Java class. This class is a domain model mapped onto the todo
table that will be created automatically by Spring Boot. The following code omits
the getter and setter methods for brevity.
Java
package com.example.demo;

import org.springframework.data.annotation.Id;

public class Todo {

    public Todo() {
    }

    public Todo(String description, String details, boolean done) {
        this.description = description;
        this.details = details;
        this.done = done;
    }

    @Id
    private Long id;
    private String description;
    private String details;
    private boolean done;
}
4. Edit the startup class file to show the following content.
Java
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.data.repository.CrudRepository;

import java.util.stream.Collectors;
import java.util.stream.Stream;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    ApplicationListener<ApplicationReadyEvent> basicsApplicationListener(TodoRepository repository) {
        return event -> repository
            .saveAll(Stream.of(new Todo("configuration", "congratulations, you have set up correctly", true))
                .collect(Collectors.toList()))
            .forEach(System.out::println);
    }
}

interface TodoRepository extends CrudRepository<Todo, Long> {
}
Tip
5. Start the application. The application stores data into the database. You'll see logs
similar to the following example:
shell
com.example.demo.Todo@4bdb04c8
Next steps
Azure for Spring developers
Use Spring Data JPA with Azure SQL
Database
Article • 04/19/2023
This tutorial demonstrates how to store data in Azure SQL Database using Spring Data
JPA.
The Java Persistence API (JPA) is the standard Java API for object-relational mapping.
In this tutorial, we include two authentication methods: Azure Active Directory (Azure
AD) authentication and SQL Database authentication. The Passwordless tab shows the
Azure AD authentication and the Password tab shows the SQL Database authentication.
SQL Database authentication uses accounts stored in SQL Database. If you choose to
use passwords as credentials for the accounts, these credentials will be stored in the
user table. Because these passwords are stored in SQL Database, you need to manage
the rotation of the passwords by yourself.
Prerequisites
An Azure subscription - create one for free.
Apache Maven.
Azure CLI.
The sqlcmd utility.
If you don't have one, create an Azure SQL Server instance named sqlservertest
and a database named demo . For instructions, see Quickstart: Create a single
database - Azure SQL Database.
If you don't have a Spring Boot application, create a Maven project with the Spring
Initializr . Be sure to select Maven Project and, under Dependencies, add the
Spring Web, Spring Data JPA, and MS SQL Server Driver dependencies, and then
select Java version 8 or higher.
Important
To be able to use your database, open the server's firewall to allow the local IP address
to access the database server. For more information, see Tutorial: Secure a database in
Azure SQL Database.
If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.
Passwordless (Recommended)
To use passwordless connections, see Tutorial: Secure a database in Azure SQL
Database or use Service Connector to create an Azure AD admin user for your
Azure SQL Database server, as shown in the following steps:
1. First, install the Service Connector passwordless extension for the Azure CLI:
Azure CLI
az extension add --name serviceconnector-passwordless --upgrade
2. Then, use the following command to create the Azure AD non-admin user:
Azure CLI
az connection create sql \
--resource-group <your-resource-group-name> \
--connection sql_conn \
--target-resource-group <your-resource-group-name> \
--server sqlservertest \
--database demo \
--user-account \
--query authInfo.userName \
--output tsv
The Azure AD admin you created is an SQL database admin user, so you don't need
to create a new user.
Important
To install the Spring Cloud Azure Starter module, add the following dependencies to
your pom.xml file:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-dependencies</artifactId>
<version>4.9.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
Note
XML
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-starter</artifactId>
</dependency>
Passwordless (Recommended)
properties
logging.level.org.hibernate.SQL=DEBUG
spring.datasource.url=jdbc:sqlserver://sqlservertest.database.windows.net:1433;databaseName=demo;authentication=DefaultAzureCredential;
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServer2016Dialect
spring.jpa.hibernate.ddl-auto=create-drop
Warning
2. Create a new Todo Java class. This class is a domain model mapped onto the todo
table that will be created automatically by JPA. The following code omits the
getter and setter methods for brevity.
Java
package com.example.demo;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Todo {

    public Todo() {
    }

    public Todo(String description, String details, boolean done) {
        this.description = description;
        this.details = details;
        this.done = done;
    }

    @Id
    @GeneratedValue
    private Long id;
    private String description;
    private String details;
    private boolean done;
}
3. Edit the startup class file to show the following content.
Java
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.data.jpa.repository.JpaRepository;

import java.util.stream.Collectors;
import java.util.stream.Stream;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    ApplicationListener<ApplicationReadyEvent> basicsApplicationListener(TodoRepository repository) {
        return event -> repository
            .saveAll(Stream.of(new Todo("configuration", "congratulations, you have set up correctly", true))
                .collect(Collectors.toList()))
            .forEach(System.out::println);
    }
}

interface TodoRepository extends JpaRepository<Todo, Long> {
}
Tip
The DefaultAzureCredential class, part of the Azure Identity client library,
determines which authentication method to use at runtime. This approach enables
your app to use different authentication methods in different environments (such
as local and production environments) without implementing environment-specific
code. For more information, see the Default Azure credential section of
Authenticate Azure-hosted Java applications.
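The behavior the tip describes — trying a sequence of credential sources and using the first one that succeeds — can be pictured with a small, self-contained sketch. This is a conceptual stand-in, not the real DefaultAzureCredential implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

// A toy credential chain: each supplier returns a token or null, and the
// chain picks the first non-null result, mirroring how a chained credential
// falls through environment, managed identity, and developer-tool
// credentials at runtime.
public class CredentialChain {
    private final List<Supplier<String>> sources;

    public CredentialChain(List<Supplier<String>> sources) {
        this.sources = sources;
    }

    public Optional<String> resolveToken() {
        return sources.stream()
                .map(Supplier::get)
                .filter(token -> token != null)
                .findFirst();
    }

    public static void main(String[] args) {
        CredentialChain chain = new CredentialChain(Arrays.asList(
                () -> null,               // e.g. no environment variables set
                () -> null,               // e.g. no managed identity available
                () -> "dev-tool-token")); // e.g. a developer CLI login succeeds
        System.out.println(chain.resolveToken().orElse("no credential"));
    }
}
```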
4. Start the application. You'll see logs similar to the following example:
shell
com.example.demo.Todo@1f
Next steps
Azure for Spring developers
Use Spring Data R2DBC with Azure SQL
Database
Article • 05/26/2023
This article demonstrates creating a sample application that uses Spring Data R2DBC
to store and retrieve information in Azure SQL Database by using the R2DBC
implementation for Microsoft SQL Server from the r2dbc-mssql GitHub repository.
R2DBC brings reactive APIs to traditional relational databases. You can use it with
Spring WebFlux to create fully reactive Spring Boot applications that use non-blocking
APIs. It provides better scalability than the classic "one thread per connection" approach.
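The scalability claim can be pictured without any Spring machinery: in the classic model, each in-flight request pins a thread for the full duration of a database call, while a non-blocking model starts the work and frees the thread until results arrive. A minimal, framework-free sketch using CompletableFuture (purely illustrative; R2DBC itself is built on Reactive Streams, not CompletableFuture):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Contrast "one thread per connection" with asynchronous composition:
// several simulated queries are started without dedicating a waiting
// thread to each one; results are only joined after all calls began.
public class NonBlockingSketch {

    // Simulates an asynchronous database call that completes later.
    static CompletableFuture<String> queryAsync(int id) {
        return CompletableFuture.supplyAsync(() -> "row-" + id);
    }

    public static void main(String[] args) {
        List<CompletableFuture<String>> inFlight = IntStream.range(0, 5)
                .mapToObj(NonBlockingSketch::queryAsync)
                .collect(Collectors.toList());
        List<String> rows = inFlight.stream()
                .map(CompletableFuture::join) // join only at the end
                .collect(Collectors.toList());
        System.out.println(rows);
    }
}
```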
Prerequisites
An Azure subscription - create one for free.
Apache Maven.
Azure CLI.
The sqlcmd utility.
Bash
export AZ_RESOURCE_GROUP=database-workshop
export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
export AZ_LOCATION=<YOUR_AZURE_REGION>
export AZ_SQL_SERVER_ADMIN_USERNAME=spring
export AZ_SQL_SERVER_ADMIN_PASSWORD=<YOUR_AZURE_SQL_ADMIN_PASSWORD>
export AZ_SQL_SERVER_NON_ADMIN_USERNAME=nonspring
export AZ_SQL_SERVER_NON_ADMIN_PASSWORD=<YOUR_AZURE_SQL_NON_ADMIN_PASSWORD>
export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
Replace the placeholders with the following values, which are used throughout this
article:
<YOUR_DATABASE_NAME>: The name of your Azure SQL Database server, which should
be unique across Azure.
<YOUR_AZURE_REGION>: The Azure region you'll use. You can use eastus by default,
but we recommend that you configure a region closer to where you live. You can
see the full list of available regions by using az account list-locations.
<YOUR_AZURE_SQL_ADMIN_PASSWORD> and <YOUR_AZURE_SQL_NON_ADMIN_PASSWORD>: The
passwords of your Azure SQL Database server, which should have a minimum of
eight characters. The characters should be from three of the following four
categories: English uppercase letters, English lowercase letters, numbers (0-9),
and non-alphanumeric characters (!, $, #, %, and so on).
<YOUR_LOCAL_IP_ADDRESS> : The IP address of your local computer, from which you'll
run your Spring Boot application. One convenient way to find it is to open
whatismyip.akamai.com .
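The password-complexity rule above — at least eight characters drawn from at least three of the four categories — is easy to check mechanically. A sketch of that rule (illustrative only; the authoritative rules are in the SQL Server password policy):

```java
import java.util.function.IntPredicate;

// Checks the complexity rule described above: minimum eight characters,
// drawn from at least three of four categories (uppercase, lowercase,
// digits, non-alphanumeric).
public class PasswordPolicySketch {

    static boolean meetsPolicy(String password) {
        if (password == null || password.length() < 8) {
            return false;
        }
        int categories = 0;
        categories += containsAny(password, Character::isUpperCase) ? 1 : 0;
        categories += containsAny(password, Character::isLowerCase) ? 1 : 0;
        categories += containsAny(password, Character::isDigit) ? 1 : 0;
        categories += containsAny(password, c -> !Character.isLetterOrDigit(c)) ? 1 : 0;
        return categories >= 3;
    }

    private static boolean containsAny(String s, IntPredicate category) {
        return s.chars().anyMatch(category);
    }

    public static void main(String[] args) {
        System.out.println(meetsPolicy("Secret!123"));   // true: four categories present
        System.out.println(meetsPolicy("alllowercase")); // false: only one category
    }
}
```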
Create a resource group using the following command:
Azure CLI
az group create \
--name $AZ_RESOURCE_GROUP \
--location $AZ_LOCATION \
--output tsv
Note
The MS SQL password has to meet specific criteria, and setup will fail with a non-
compliant password. For more information, see Password Policy.
Azure CLI
az sql server create \
--resource-group $AZ_RESOURCE_GROUP \
--name $AZ_DATABASE_NAME \
--location $AZ_LOCATION \
--admin-user $AZ_SQL_SERVER_ADMIN_USERNAME \
--admin-password $AZ_SQL_SERVER_ADMIN_PASSWORD \
--output tsv
Because you configured your local IP address at the beginning of this article, you can
open the server's firewall by running the following command:
Azure CLI
az sql server firewall-rule create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name allow-local-ip \
    --server $AZ_DATABASE_NAME \
    --start-ip-address $AZ_LOCAL_IP_ADDRESS \
    --end-ip-address $AZ_LOCAL_IP_ADDRESS \
    --output tsv
If you're connecting to your Azure SQL Database server from Windows Subsystem for
Linux (WSL) on a Windows computer, you need to add the WSL host ID to your firewall.
Obtain the IP address of your host machine by running the following command in WSL:
Bash
cat /etc/resolv.conf
Copy the IP address following the term nameserver , then use the following command to
set an environment variable for the WSL IP Address:
Bash
export AZ_WSL_IP_ADDRESS=<the-copied-IP-address>
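The manual copy step above — pulling the address that follows the term nameserver out of /etc/resolv.conf — can also be expressed as a few lines of parsing code. A small sketch (the tutorial simply has you copy the value by hand; this is only an illustration of what you're extracting):

```java
// Extracts the IP address that follows "nameserver" from the contents
// of a resolv.conf-style file.
public class ResolvConfSketch {

    static String nameserver(String resolvConf) {
        for (String line : resolvConf.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.startsWith("nameserver")) {
                return trimmed.substring("nameserver".length()).trim();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String sample = "# This file was automatically generated by WSL\nnameserver 172.22.32.1\n";
        System.out.println(nameserver(sample)); // prints 172.22.32.1
    }
}
```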
Then, use the following command to open the server's firewall to your WSL-based app:
Azure CLI
az sql server firewall-rule create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name allow-wsl-ip \
    --server $AZ_DATABASE_NAME \
    --start-ip-address $AZ_WSL_IP_ADDRESS \
    --end-ip-address $AZ_WSL_IP_ADDRESS \
    --output tsv
Next, create a database named demo using the following command:
Azure CLI
az sql db create \
--resource-group $AZ_RESOURCE_GROUP \
--name demo \
--server $AZ_DATABASE_NAME \
--output tsv
Create a SQL script called create_user.sql for creating a non-admin user. Add the
following contents and save it locally:
SQL
USE demo;
GO
CREATE USER $(USERNAME) WITH PASSWORD = '$(PASSWORD)';
GO
GRANT CONTROL ON DATABASE::demo TO $(USERNAME);
GO
Then, use the following command to run the SQL script to create the non-admin user:
Bash
sqlcmd -S $AZ_DATABASE_NAME.database.windows.net,1433 \
    -d demo \
    -U $AZ_SQL_SERVER_ADMIN_USERNAME \
    -P $AZ_SQL_SERVER_ADMIN_PASSWORD \
    -i create_user.sql \
    -v USERNAME=$AZ_SQL_SERVER_NON_ADMIN_USERNAME PASSWORD=$AZ_SQL_SERVER_NON_ADMIN_PASSWORD
Note
For more information about creating SQL database users, see CREATE USER
(Transact-SQL).
Bash
XML
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-mssql</artifactId>
<scope>runtime</scope>
</dependency>
properties
logging.level.org.springframework.data.r2dbc=DEBUG
spring.r2dbc.url=r2dbc:pool:mssql://$AZ_DATABASE_NAME.database.windows.net:1433/demo
spring.r2dbc.username=nonspring@$AZ_DATABASE_NAME
spring.r2dbc.password=$AZ_SQL_SERVER_NON_ADMIN_PASSWORD
Note
You should now be able to start your application by using the provided Maven wrapper
as follows:
Bash
./mvnw spring-boot:run
Next, modify the DemoApplication class to configure the schema initialization that
runs at application startup:
Java
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.ClassPathResource;
import org.springframework.data.r2dbc.connectionfactory.init.ConnectionFactoryInitializer;
import org.springframework.data.r2dbc.connectionfactory.init.ResourceDatabasePopulator;

import io.r2dbc.spi.ConnectionFactory;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    public ConnectionFactoryInitializer initializer(ConnectionFactory connectionFactory) {
        ConnectionFactoryInitializer initializer = new ConnectionFactoryInitializer();
        initializer.setConnectionFactory(connectionFactory);
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator(new ClassPathResource("schema.sql"));
        initializer.setDatabasePopulator(populator);
        return initializer;
    }
}
This Spring bean uses a file called schema.sql, so create that file in the
src/main/resources folder, and add the following text:
SQL
DROP TABLE IF EXISTS todo;
CREATE TABLE todo (id INT IDENTITY PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BIT);
Stop the running application, and start it again using the following command. The
application will now use the demo database that you created earlier, and create a todo
table inside it.
Bash
./mvnw spring-boot:run
Create a new Todo Java class, next to the DemoApplication class, using the following
code:
Java
package com.example.demo;

import org.springframework.data.annotation.Id;

public class Todo {

    public Todo() {
    }

    public Todo(String description, String details, boolean done) {
        this.description = description;
        this.details = details;
        this.done = done;
    }

    @Id
    private Long id;
    private String description;
    private String details;
    private boolean done;
}
This class is a domain model mapped on the todo table that you created before.
To manage that class, you need a repository. Define a new TodoRepository interface in
the same package, using the following code:
Java
package com.example.demo;

import org.springframework.data.repository.reactive.ReactiveCrudRepository;

public interface TodoRepository extends ReactiveCrudRepository<Todo, Long> {
}
Finish the application by creating a controller that can store and retrieve data.
Implement a TodoController class in the same package, and add the following code:
Java
package com.example.demo;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/")
public class TodoController {

    private final TodoRepository todoRepository;

    public TodoController(TodoRepository todoRepository) {
        this.todoRepository = todoRepository;
    }

    @PostMapping("/")
    @ResponseStatus(HttpStatus.CREATED)
    public Mono<Todo> createTodo(@RequestBody Todo todo) {
        return todoRepository.save(todo);
    }

    @GetMapping("/")
    public Flux<Todo> getTodos() {
        return todoRepository.findAll();
    }
}
Finally, halt the application and start it again using the following command:
Bash
./mvnw spring-boot:run
Test the application
To test the application, you can use cURL.
First, create a new "todo" item in the database using the following command (the
JSON payload mirrors the fields of the Todo class):
Bash
curl --header "Content-Type: application/json" \
    --request POST \
    --data '{"description":"configuration","details":"congratulations, you have set up correctly!","done": true}' \
    http://127.0.0.1:8080

This command returns the created item as JSON.
Next, retrieve the data by using a new cURL request with the following command:
Bash
curl http://127.0.0.1:8080
This command returns the list of "todo" items, including the item you've created.
Congratulations! You've created a fully reactive Spring Boot application that uses R2DBC
to store and retrieve data from Azure SQL Database.
Clean up resources
To clean up all resources used during this quickstart, delete the resource group by using
the following command:
Azure CLI
az group delete \
--name $AZ_RESOURCE_GROUP \
--yes
Next steps
To learn more about deploying a Spring Data application to Azure Spring Apps and
using managed identity, see Tutorial: Deploy a Spring application to Azure Spring Apps
with a passwordless connection to an Azure database.
To learn more about Spring and Azure, continue to the Spring on Azure documentation
center.
Spring on Azure
See also
For more information about Spring Data R2DBC, see Spring's reference
documentation .
For more information about using Azure with Java, see Azure for Java developers and
Working with Azure DevOps and Java.
Create and use append-only ledger
tables
Article • 05/23/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
This article shows you how to create an append-only ledger table. Next, you'll insert
values in your append-only ledger table and then attempt to make updates to the data.
Finally, you'll view the results by using the ledger view. We'll use an example of a card
key access system for a facility, which is an append-only system pattern. Our example
will give you a practical look at the relationship between the append-only ledger table
and its corresponding ledger view.
Prerequisites
SQL Server Management Studio or Azure Data Studio.
Column | Data type | Description
Timestamp | datetime2 | The date and time the employee accessed the building
1. Use SQL Server Management Studio or Azure Data Studio to create a new schema
and table called [AccessControl].[KeyCardEvents] .
SQL
CREATE SCHEMA [AccessControl];
GO

CREATE TABLE [AccessControl].[KeyCardEvents]
(
    EmployeeID INT NOT NULL,
    AccessOperationDescription NVARCHAR(1024) NOT NULL,
    [Timestamp] DATETIME2 NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));
GO
2. Insert a new building access event into the table. For example, record that
employee 43869 accessed the building at a given date and time:
SQL
INSERT INTO [AccessControl].[KeyCardEvents]
VALUES (43869, 'Building42', '2020-05-02T19:58:47');
3. View the contents of your KeyCardEvents table, and specify the GENERATED
ALWAYS columns that are added to your append-only ledger table.
SQL
SELECT *
,[ledger_start_transaction_id]
,[ledger_start_sequence_number]
FROM [AccessControl].[KeyCardEvents];
4. View the contents of your KeyCardEvents ledger view along with the ledger
transactions system view to identify who added records into the table.
SQL
SELECT
t.[commit_time] AS [CommitTime]
, t.[principal_name] AS [UserName]
, l.[EmployeeID]
, l.[AccessOperationDescription]
, l.[Timestamp]
, l.[ledger_operation_type_desc] AS Operation
FROM [AccessControl].[KeyCardEvents_Ledger] l
JOIN sys.database_ledger_transactions t
ON t.transaction_id = l.ledger_transaction_id
5. Try to update the KeyCardEvents table by changing the EmployeeID from 43869 to
34184.
SQL
UPDATE [AccessControl].[KeyCardEvents]
SET [EmployeeID] = 34184
WHERE [EmployeeID] = 43869;
You'll receive an error message that states the updates aren't allowed for your
append-only ledger table.
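The append-only pattern enforced here can be mimicked outside the database. A tiny conceptual sketch of a collection that accepts inserts but rejects any modification of existing entries (purely an illustration of the pattern, not of how SQL ledger is implemented):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// An append-only event log: records can be added and read, but any
// attempt to modify an existing entry is rejected, mirroring the error
// the append-only ledger table raises on UPDATE.
public class AppendOnlyLog {
    private final List<String> events = new ArrayList<>();

    public void append(String event) {
        events.add(event);
    }

    public void update(int index, String event) {
        throw new UnsupportedOperationException("Updates are not allowed on an append-only log");
    }

    public List<String> events() {
        return Collections.unmodifiableList(events);
    }

    public static void main(String[] args) {
        AppendOnlyLog log = new AppendOnlyLog();
        log.append("employee 43869 entered building 42");
        System.out.println(log.events().size()); // prints 1
    }
}
```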
Permissions
Creating append-only ledger tables requires the ENABLE LEDGER permission. For more
information on permissions related to ledger tables, see Permissions.
Next steps
Append-only ledger tables
How to migrate data from regular tables to ledger tables
Create and use updatable ledger tables
Article • 05/24/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
This article shows you how to create an updatable ledger table. Next, you'll insert values
in your updatable ledger table and then make updates to the data. Finally, you'll view
the results by using the ledger view. We'll use an example of a banking application that
tracks banking customers' balances in their accounts. Our example will give you a
practical look at the relationship between the updatable ledger table and its
corresponding history table and ledger view.
Prerequisites
SQL Server Management Studio or Azure Data Studio.
1. Use SQL Server Management Studio or Azure Data Studio to create a new schema
and table called [Account].[Balance] .
SQL
CREATE SCHEMA [Account];
GO

CREATE TABLE [Account].[Balance]
(
    CustomerID INT NOT NULL PRIMARY KEY CLUSTERED,
    LastName VARCHAR(50) NOT NULL,
    FirstName VARCHAR(50) NOT NULL,
    Balance DECIMAL(10,2) NOT NULL
)
WITH
(
    SYSTEM_VERSIONING = ON,
    LEDGER = ON
);
Note
2. When your updatable ledger table is created, the corresponding history table and
ledger view are also created. Run the following T-SQL commands to see the new
table and the new view.
SQL
SELECT ts.[name] + '.' + t.[name] AS [ledger_table_name]
    , hs.[name] + '.' + h.[name] AS [history_table_name]
    , vs.[name] + '.' + v.[name] AS [ledger_view_name]
FROM sys.tables AS t
JOIN sys.tables AS h ON (h.[object_id] = t.[history_table_id])
JOIN sys.views v ON (v.[object_id] = t.[ledger_view_id])
JOIN sys.schemas ts ON (ts.[schema_id] = t.[schema_id])
JOIN sys.schemas hs ON (hs.[schema_id] = h.[schema_id])
JOIN sys.schemas vs ON (vs.[schema_id] = v.[schema_id]);
3. Insert the name Nick Jones as a new customer with an opening balance of $50.
SQL
INSERT INTO [Account].[Balance]
VALUES (1, 'Jones', 'Nick', 50);
4. Insert the names John Smith , Joe Smith , and Mary Michaels as new customers
with opening balances of $500, $30, and $200, respectively.
SQL
INSERT INTO [Account].[Balance]
VALUES (2, 'Smith', 'John', 500),
    (3, 'Smith', 'Joe', 30),
    (4, 'Michaels', 'Mary', 200);
5. View the [Account].[Balance] updatable ledger table, and specify the GENERATED
ALWAYS columns added to the table.
SQL
SELECT [CustomerID]
,[LastName]
,[FirstName]
,[Balance]
,[ledger_start_transaction_id]
,[ledger_end_transaction_id]
,[ledger_start_sequence_number]
,[ledger_end_sequence_number]
FROM [Account].[Balance];
In the results window, you'll first see the values inserted by your T-SQL commands,
along with the system metadata that's used for data lineage purposes.
6. Update Nick's balance from 50 to 100.
SQL
UPDATE [Account].[Balance]
SET [Balance] = 100
WHERE [CustomerID] = 1;
7. View the [Account].[Balance] ledger view, along with the transaction ledger
system view to identify users that made the changes.
SQL
SELECT
t.[commit_time] AS [CommitTime]
, t.[principal_name] AS [UserName]
, l.[CustomerID]
, l.[LastName]
, l.[FirstName]
, l.[Balance]
, l.[ledger_operation_type_desc] AS Operation
FROM [Account].[Balance_Ledger] l
JOIN sys.database_ledger_transactions t
ON t.transaction_id = l.ledger_transaction_id
Tip
We recommend that you query the history of changes through the ledger
view and not the history table.
Nick's account balance was successfully updated in the updatable ledger table to
100.
The ledger view shows that updating the ledger table consists of a DELETE of the
original row with the 50 balance, and a corresponding INSERT of a new row with the
100 balance for Nick.
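The DELETE-plus-INSERT bookkeeping that the ledger view exposes can be sketched in a few lines: an update is recorded as two ledger operations rather than an in-place change. A conceptual illustration only, not how SQL ledger is implemented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Records every update as a DELETE of the old value plus an INSERT of
// the new one, the way the ledger view presents changes to a row.
public class LedgeredBalances {
    private final Map<Integer, Integer> balances = new HashMap<>();
    private final List<String> ledger = new ArrayList<>();

    public void insert(int customerId, int balance) {
        balances.put(customerId, balance);
        ledger.add("INSERT " + customerId + " -> " + balance);
    }

    public void update(int customerId, int newBalance) {
        Integer old = balances.get(customerId);
        ledger.add("DELETE " + customerId + " -> " + old);
        ledger.add("INSERT " + customerId + " -> " + newBalance);
        balances.put(customerId, newBalance);
    }

    public List<String> ledger() {
        return ledger;
    }

    public static void main(String[] args) {
        LedgeredBalances table = new LedgeredBalances();
        table.insert(1, 50);
        table.update(1, 100);
        System.out.println(table.ledger()); // [INSERT 1 -> 50, DELETE 1 -> 50, INSERT 1 -> 100]
    }
}
```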
Permissions
Creating updatable ledger tables requires the ENABLE LEDGER permission. For more
information on permissions related to ledger tables, see Permissions.
Next steps
Database ledger
Updatable ledger tables
Append-only ledger tables
How to migrate data from regular tables to ledger tables
Migrate data from regular tables to
ledger tables
Article • 05/23/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
Converting regular tables to ledger tables isn't possible, but you can migrate the data
from an existing regular table to a ledger table, and then replace the original table with
the ledger table.
When you're performing a database ledger verification, the process needs to order all
operations within each transaction. If you use a SELECT INTO or BULK INSERT statement
to copy a few billion rows from a regular table to a ledger table, it will all be done in one
single transaction. This means lots of data needs to be fully sorted, which will be done in
a single thread. The sorting operation takes a long time to complete.
To convert a regular table into a ledger table, Microsoft recommends using the
sys.sp_copy_data_in_batches stored procedure. This procedure splits the copy
operation into batches of 10,000-100,000 rows per transaction. As a result, the
database ledger verification has smaller transactions that can be sorted in
parallel, which dramatically reduces the verification time.
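The effect of batching can be sketched independently of SQL: splitting one huge copy into fixed-size transactions yields many small units of work instead of a single giant one. A toy illustration of the planning step (batch size and range representation are arbitrary choices here):

```java
import java.util.ArrayList;
import java.util.List;

// Splits a copy of rowCount rows into transactions of at most batchSize
// rows each, the way batched copying keeps individual transactions small
// enough to verify in parallel.
public class BatchedCopySketch {

    static List<int[]> planBatches(int rowCount, int batchSize) {
        List<int[]> batches = new ArrayList<>();
        for (int start = 0; start < rowCount; start += batchSize) {
            int end = Math.min(start + batchSize, rowCount);
            batches.add(new int[] { start, end }); // half-open range [start, end)
        }
        return batches;
    }

    public static void main(String[] args) {
        // 250,000 rows copied in batches of 100,000 -> 3 transactions
        System.out.println(planBatches(250_000, 100_000).size()); // prints 3
    }
}
```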
Note
You can still use other commands, services, or tools to copy the data from the
source table to the target table. Just make sure you avoid large transactions,
because those will have a performance impact on the database ledger verification.
This article shows you how to convert a regular table into a ledger table.
Prerequisites
SQL Server Management Studio or Azure Data Studio.
For this example, assume you have a regular Employees table like the following:
SQL
CREATE TABLE [dbo].[Employees]
(
    EmployeeID INT NOT NULL PRIMARY KEY,
    FirstName NVARCHAR(50) NOT NULL,
    LastName NVARCHAR(50) NOT NULL,
    Salary MONEY NOT NULL
);
The easiest way to create an append-only ledger table or updatable ledger table is
to script the original table and add the LEDGER = ON clause. In the script below,
we're creating a new updatable ledger table called Employees_LedgerTable, based on
the schema of the Employees table.
SQL
CREATE TABLE [dbo].[Employees_LedgerTable]
(
    EmployeeID INT NOT NULL PRIMARY KEY,
    FirstName NVARCHAR(50) NOT NULL,
    LastName NVARCHAR(50) NOT NULL,
    Salary MONEY NOT NULL
)
WITH
(
    SYSTEM_VERSIONING = ON,
    LEDGER = ON
);
In the script below, we're copying the data from the regular Employees table to the new
updatable ledger table, Employees_LedgerTable .
SQL
EXEC sys.sp_copy_data_in_batches
    @source_table_name = N'[dbo].[Employees]',
    @target_table_name = N'[dbo].[Employees_LedgerTable]';
Next steps
Append-only ledger tables
Updatable ledger tables
Configure a ledger database
Article • 07/14/2023
Applies to: SQL Server 2022 (16.x) Azure SQL Database Azure SQL
Managed Instance
This article provides information on configuring a ledger database using the Azure
portal, T-SQL, PowerShell, or the Azure CLI for Azure SQL Database. For information on
creating a ledger database in SQL Server 2022 (16.x) or Azure SQL Managed Instance,
use the switch at the top of this page.
Prerequisites
Have an active Azure subscription. If you don't have one, create a free account .
A logical server.
Note
Enabling the ledger functionality at the database level will make all tables in this
database updatable ledger tables. This option cannot be changed after the
database is created. Creating a table with the option LEDGER = OFF will throw an
error message.
Portal
Next steps
Ledger overview
Append-only ledger tables
Updatable ledger tables
Enable automatic digest storage
Verify a ledger table to detect
tampering
Article • 03/03/2023
Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
In this article, you'll verify the integrity of the data in your ledger tables. If
you've configured Automatic digest storage on your database, follow the T-SQL using
automatic digest storage section. Otherwise, follow the T-SQL using a manually
generated digest section.
Prerequisites
Have an active Azure subscription if you're using Azure SQL Database or Azure SQL
Managed Instance. If you don't have one, create a free account .
Create and use updatable ledger tables or create and use append-only ledger
tables.
SQL Server Management Studio or Azure Data Studio.
The database option ALLOW_SNAPSHOT_ISOLATION has to be enabled on the
database before you can run the verification stored procedures.
SQL
DECLARE @digest_locations NVARCHAR(MAX) =
    (SELECT * FROM sys.database_ledger_digest_locations FOR JSON AUTO, INCLUDE_NULL_VALUES);
SELECT @digest_locations AS digest_locations;
BEGIN TRY
    EXEC sys.sp_verify_database_ledger_from_digest_storage @digest_locations;
    SELECT 'Ledger verification succeeded.' AS Result;
END TRY
BEGIN CATCH
    THROW;
END CATCH
Note
The verification script can also be found in the Azure portal. Open the
Azure portal and locate the database you want to verify. In Security,
select the Ledger option. In the Ledger pane, select </> Verify database.
3. Execute the query. You'll see that digest_locations returns the current location
of where your database digests are stored and any previous locations. Result
returns the success or failure of ledger verification.
4. Open the digest_locations result set to view the locations of your digests. The
following example shows two digest storage locations for this database:
JSON
[
  {
    "path": "https://digest1.blob.core.windows.net/sqldbledgerdigests/janderstestportal2server/jandersnewdb/2021-05-20T04:39:47.6570000",
    "last_digest_block_id": 10016,
    "is_current": true
  },
  {
    "path": "https://jandersneweracl.confidential-ledger.azure.com/sqldbledgerdigests/janderstestportal2server/jandersnewdb/2021-05-20T04:39:47.6570000",
    "last_digest_block_id": 1704,
    "is_current": false
  }
]
Important
Output
Output
Next steps
Ledger overview
sys.database_ledger_digest_locations
sp_verify_database_ledger_from_digest_storage
sp_verify_database_ledger
sp_generate_database_ledger_digest
Enable vulnerability assessment on your
Azure SQL databases
Article • 05/18/2023
In this article, you'll learn how to enable vulnerability assessment so you can find and
remediate database vulnerabilities. We recommend that you enable vulnerability
assessment using the express configuration so you aren't dependent on a storage
account. You can also enable vulnerability assessment using the classic configuration.
When you enable the Defender for Azure SQL plan in Defender for Cloud, Defender for
Cloud automatically enables Advanced Threat Protection and vulnerability assessment
with the express configuration for all Azure SQL databases in the selected subscription.
If you have Azure SQL databases with vulnerability assessment disabled, you can
enable vulnerability assessment in the express or classic configuration.
If you have Azure SQL databases with vulnerability assessment enabled in the
classic configuration, you can enable the express configuration so that assessments
don't require a storage account.
Prerequisites
Make sure that Microsoft Defender for Azure SQL is enabled so that you can run
scans on your Azure SQL databases.
Make sure you read and understand the differences between the express and
classic configuration.
Express configuration
Classic configuration
Express configuration
To enable vulnerability assessment without a storage account, using the express
configuration:
Important
Now you can go to the SQL databases should have vulnerability findings resolved
recommendation to see the vulnerabilities found in your databases. You can also run
on-demand vulnerability assessment scans to see the current findings.
Note
Each database is randomly assigned a scan time on a set day of the week.
If you have SQL resources that don't have Advanced Threat Protection and vulnerability
assessment enabled, you can use the SQL vulnerability assessment APIs to enable SQL
vulnerability assessment with the express configuration at scale.
Classic configuration
To enable vulnerability assessment with a storage account, use the classic configuration:
1. In the Azure portal , open the specific resource in Azure SQL Database, SQL
Managed Instance Database, or Azure Synapse.
4. In the Server settings page, enter the Microsoft Defender for SQL settings:
a. Configure a storage account where your scan results for all databases on the
server or managed instance will be stored. For information about storage
accounts, see About Azure storage accounts.
Note
Each database is randomly assigned a scan time on a set day of the week.
Email notifications are scheduled randomly per server on a set day of the
week. The email notification report includes data from all recurring
database scans that were executed during the preceding week (does not
include on-demand scans).
Next steps
Learn more about:
Microsoft Defender for Cloud provides vulnerability assessment for your Azure SQL
databases. Vulnerability assessment scans your databases for software vulnerabilities
and provides a list of findings. You can use the findings to remediate software
vulnerabilities and disable findings.
Prerequisites
Make sure that you know whether you're using the express or classic configurations
before you continue.
1. In the Azure portal , open the specific resource in Azure SQL Database, SQL
Managed Instance Database, or Azure Synapse.
2. Under the Security heading, select Defender for Cloud.
3. In the Enablement Status, select Configure to open the Microsoft Defender for
SQL settings pane for either the entire server or managed instance.
If the vulnerability settings show the option to configure a storage account, you're using
the classic configuration. If not, you're using the express configuration.
Express configuration
Classic configuration
Express configuration
Express configuration doesn't store scan results if they're identical to previous scans. The
scan time shown in the scan history is the time of the last scan where the scan results
changed.
Disable specific findings from Microsoft Defender for
Cloud (preview)
If you have an organizational need to ignore a finding rather than remediate it, you can
disable the finding. Disabled findings don't impact your secure score or generate
unwanted noise. You can see the disabled finding in the "Not applicable" section of the
scan results.
When a finding matches the criteria you've defined in your disable rules, it won't appear
in the list of findings. Typical scenarios may include:
Important
To disable specific findings, you need permissions to edit a policy in Azure Policy.
Learn more in Azure RBAC permissions in Azure Policy.
To create a rule:
3. Define your criteria. You can use any of the following criteria:
Finding ID
Severity
Benchmarks
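Conceptually, a disable rule filters findings on the criteria above: any finding matching a rule is moved out of the reported list into the "Not applicable" set. The sketch below uses hypothetical field names and is not the Defender for Cloud implementation:

```python
def apply_disable_rules(findings, rules):
    """Split findings into reported and disabled ('Not applicable') lists.

    findings: dicts with hypothetical keys 'id', 'severity', 'benchmarks'.
    rules: dicts with optional keys 'finding_ids', 'severities', 'benchmarks'.
    A finding is disabled when it matches any rule's criteria.
    """
    def matches(finding, rule):
        if "finding_ids" in rule and finding["id"] not in rule["finding_ids"]:
            return False
        if "severities" in rule and finding["severity"] not in rule["severities"]:
            return False
        if "benchmarks" in rule and not (set(finding["benchmarks"]) & set(rule["benchmarks"])):
            return False
        return True

    reported, disabled = [], []
    for f in findings:
        (disabled if any(matches(f, r) for r in rules) else reported).append(f)
    return reported, disabled
```

A rule that omits a criterion matches on the remaining ones, which is why a severity-only rule can disable every finding of that severity.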
Use the following ARM template to create a new Azure SQL Logical Server with
express configuration for SQL vulnerability assessment.
Here are several examples of how you can set up baselines using ARM templates:
JSON
{
  "type": "Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines",
  "apiVersion": "2022-02-01-preview",
  "name": "[concat(parameters('serverName'), '/', parameters('databaseName'), '/default/default')]",
  "properties": {
    "latestScan": true
  }
}
JSON
{
  "type": "Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines",
  "apiVersion": "2022-02-01-preview",
  "name": "[concat(parameters('serverName'), '/', parameters('databaseName'), '/default/default')]",
  "properties": {
    "latestScan": false,
    "results": {
      "VA2065": [
        [ "FirewallRuleName3", "62.92.15.67", "62.92.15.67" ],
        [ "FirewallRuleName4", "62.92.15.68", "62.92.15.68" ]
      ],
      "VA2130": [
        [ "dbo" ]
      ]
    }
  }
}
JSON
{
  "type": "Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines/rules",
  "apiVersion": "2022-02-01-preview",
  "name": "[concat(parameters('serverName'), '/', parameters('databaseName'), '/default/default/VA1143')]",
  "properties": {
    "latestScan": false,
    "results": [
      [ "True" ]
    ]
  }
}
Set up batch baselines on the master database based on latest scan results:
JSON
{
  "type": "Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines",
  "apiVersion": "2022-02-01-preview",
  "name": "[concat(parameters('serverName'), '/master/default/default')]",
  "properties": {
    "latestScan": true
  }
}
Using PowerShell
Express configuration isn't supported in the PowerShell cmdlets, but you can use PowerShell
to invoke the latest vulnerability assessment capabilities through the REST API. For example:
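One such call is triggering an on-demand scan through the ARM endpoint for express configuration. The sketch below only builds the request path; the subscription, resource group, server, and database names are placeholders, and you should verify the operation name and API version against the Azure REST API reference before use:

```python
def build_scan_request_path(subscription_id, resource_group, server, database,
                            api_version="2022-02-01-preview"):
    """Builds the ARM request path for initiating an express-configuration
    vulnerability assessment scan (hypothetical helper, illustration only)."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Sql/servers/{server}"
        f"/databases/{database}"
        f"/sqlVulnerabilityAssessments/default/initiateScan"
        f"?api-version={api_version}"
    )

# In PowerShell, a path built this way can be passed to:
#   Invoke-AzRestMethod -Method POST -Path $path
```

The same path works with any authenticated HTTP client against the Azure Resource Manager endpoint.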
Troubleshooting
1. Disable the Defender for Azure SQL plan from the Azure portal.
PowerShell
Update-AzSqlServerAdvancedThreatProtectionSetting `
-ResourceGroupName "demo-rg" `
-ServerName "dbsrv1" `
-Enable 1
Update-AzSqlServerVulnerabilityAssessmentSetting `
-ResourceGroupName "demo-rg" `
-ServerName "dbsrv1" `
-StorageAccountName "mystorage" `
-RecurringScansInterval Weekly `
-ScanResultsContainerName "vulnerability-assessment"
Errors
“Vulnerability Assessment is enabled on this server or one of its underlying databases
with an incompatible version”
Possible causes:
Solution: Try again to enable the express configuration. If the issue persists, try to
disable the Microsoft Defender for SQL in the Azure SQL resource, select Save,
enable Microsoft Defender for SQL again, and select Save.
Solution: Disable all database policies for the relevant server and then try to switch
to express configuration again.
Consider using the provided PowerShell script for assistance.
Classic configuration
When a finding matches the criteria you've defined in your disable rules, it won't appear
in the list of findings.
Typical scenarios may include:
Important
To create a rule:
3. Define your criteria. You can use any of the following criteria:
Finding ID
Severity
Benchmarks
4. Select Apply rule. Changes might take up to 24 hours to take effect.
b. From the scope list, subscriptions with active rules show as Rule applied.
Note
This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.
Important
The PowerShell Azure Resource Manager module is still supported, but all future
development is for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The
arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.
You can use Azure PowerShell cmdlets to programmatically manage your vulnerability
assessments. The supported cmdlets are:
For a script example, see Azure SQL vulnerability assessment PowerShell support.
Azure CLI
Important
The following Azure CLI commands are for SQL databases hosted on VMs or on-
premises machines. For vulnerability assessments regarding Azure SQL Databases,
refer to the Azure portal or PowerShell section.
You can use Azure CLI commands to programmatically manage your vulnerability
assessments. The supported commands are:
az security va sql baseline list: View the SQL vulnerability assessment baseline for all rules.
az security va sql baseline set: Sets the SQL vulnerability assessment baseline. Replaces the current baseline.
az security va sql baseline update: Update the SQL vulnerability assessment rule baseline. Replaces the current rule baseline.
az security va sql results list: View all SQL vulnerability assessment scan results.
az security va sql scans list: List all SQL vulnerability assessment scan summaries.
Ensure that you have enabled vulnerabilityAssessments before you add baselines.
Here's an example for defining Baseline Rule VA2065 to master database and VA1143 to
user database as resources in a Resource Manager template:
JSON
"resources": [
"type": "Microsoft.Sql/servers/databases/vulnerabilityAapiVersion":
"2018-06-01",
"name": "[concat(parameters('server_name'),'/',
parameters('database_name') , '/default/VA2065/master')]",
"properties": {
"baselineResults": [
"result": [
"FirewallRuleName3",
"StartIpAddress",
"EndIpAddress"
},
"result": [
"FirewallRuleName4",
"62.92.15.68",
"62.92.15.68"
},
"type": "Microsoft.Sql/servers/databases/vulnerabilityAapiVersion":
"2018-06-01",
"name": "[concat(parameters('server_name'),'/',
parameters('database_name'), '/default/VA2130/Default')]",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers/vulnerabilityAssessments',
parameters('server_name'), 'Default')]"
],
"properties": {
"baselineResults": [
"result": [
"dbo"
For master database and user database, the resource names are defined differently:
To handle Boolean types as true/false, set the baseline result with binary input like
"1"/"0".
JSON
"type": "Microsoft.Sql/servers/databases/vulnerabilityapiVersion":
"2018-06-01",
"name": "[concat(parameters('server_name'),'/',
parameters('database_name'), '/default/VA1143/Default')]",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers/vulnerabilityAssessments',
parameters('server_name'), 'Default')]"
],
"properties": {
"baselineResults": [
"result": [
"1"
Next steps
Learn more about Microsoft Defender for Azure SQL.
Learn more about data discovery and classification.
Learn more about storing vulnerability assessment scan results in a storage
account accessible behind firewalls and VNets.
Check out common questions about Azure SQL databases.
Find and remediate vulnerabilities in
your Azure SQL databases
Article • 05/10/2023
Permissions
One of the following permissions is required to see vulnerability assessment results
in the Microsoft Defender for Cloud recommendation SQL databases should have
vulnerability findings resolved:
Security Admin
Security Reader
The following permissions are required to change vulnerability assessment
settings:
If you're receiving any automated emails with links to scan results, the following
permissions are required to access the links about scan results or to view scan
results at the resource level:
Data residency
SQL vulnerability assessment queries the SQL server using publicly available queries
under Defender for Cloud recommendations for SQL vulnerability assessment, and
stores the query results. SQL vulnerability assessment data is stored in the location
of the logical server it's configured on. For example, if the user enabled vulnerability
assessment on a logical server in West Europe, the results will be stored in West
Europe. This data will be collected only if the SQL vulnerability assessment solution
is configured on the logical server.
1. From the resource's Defender for Cloud page, select View additional findings
in Vulnerability Assessment to access the scan results from previous scans.
2. To run an on-demand scan to scan your database for vulnerabilities, select
Scan from the toolbar:
Note
The scan is lightweight and safe. It takes a few seconds to run and is entirely
read-only. It doesn't make any changes to your database.
Remediate vulnerabilities
When a vulnerability scan completes, the report is displayed in the Azure portal. The
report presents:
1. Review your results and determine which of the report's findings are true
security issues for your environment.
2. Select each failed result to understand its impact and why the security check
failed.
3. As you review your assessment results, you can mark specific results as being
an acceptable baseline in your environment. A baseline is essentially a
customization of how the results are reported. In subsequent scans, results
that match the baseline are considered as passes. After you've established
your baseline security state, vulnerability assessment only reports on
deviations from the baseline. In this way, you can focus your attention on the
relevant issues.
4. Any findings you've added to the baseline will now appear as Passed with an
indication that they've passed because of the baseline changes. There's no
need to run another scan for the baseline to take effect.
Your vulnerability assessment scans can now be used to ensure that your database
maintains a high level of security, and that your organizational policies are met.
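The baseline semantics in steps 3 and 4 can be sketched as follows: a failed check whose current results match the approved baseline is reported as a pass. This is an illustration only, not the service implementation:

```python
def evaluate(findings, baseline):
    """Re-evaluates raw findings against an approved baseline.

    findings: dict mapping rule ID -> (status, results)
    baseline: dict mapping rule ID -> approved results
    A 'Failed' rule whose results equal the baseline counts as 'Passed'.
    """
    report = {}
    for rule_id, (status, results) in findings.items():
        if status == "Failed" and baseline.get(rule_id) == results:
            report[rule_id] = "Passed (per baseline)"
        else:
            report[rule_id] = status
    return report
```

Because evaluation happens at report time, no rescan is needed after changing the baseline, which matches the behavior described above.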
Next steps
Learn more about Microsoft Defender for Azure SQL.
Learn more about data discovery and classification.
Learn more about storing vulnerability assessment scan results in a storage
account accessible behind firewalls and VNets.
SQL information protection policy in
Microsoft Defender for Cloud
Article • 04/13/2023
Labels – The main classification attributes, used to define the sensitivity level of the
data stored in the column.
Information Types – Provides additional granularity into the type of data stored in
the column.
The information protection policy options within Defender for Cloud provide a
predefined set of labels and information types which serve as the defaults for the
classification engine. You can customize the policy, according to your organization's
needs, as described below.
Note
This option only appears for users with tenant-level permissions. Grant tenant-
wide permissions to yourself.
3. You can also modify the built-in types by adding additional search pattern strings,
disabling some of the existing strings, or by changing the description.
4. Information types are listed in order of ascending discovery ranking, meaning that
the types higher in the list will attempt to match first. To change the ranking
between information types, drag the types to the right spot in the table, or use the
Move up and Move down buttons to change the order.
5. Select OK when you are done.
6. After you've finished managing your information types, be sure to associate the
relevant types with the relevant labels by selecting Configure for a particular label
and adding or deleting information types as appropriate.
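The discovery ranking in step 4 amounts to first-match classification over the ordered list of information types. The sketch below is a simplified illustration with made-up patterns, not the actual classification engine:

```python
import re

def classify(column_name, info_types):
    """Returns the first information type whose search pattern matches,
    honoring discovery ranking: types earlier in the list try first.

    info_types: ordered list of (type_name, [regex patterns]).
    """
    for type_name, patterns in info_types:
        if any(re.search(p, column_name, re.IGNORECASE) for p in patterns):
            return type_name
    return None

# Hypothetical policy: 'SSN' ranks above the broader 'National ID',
# so a column matching both is classified as SSN.
policy = [
    ("SSN", [r"ssn", r"social.?security"]),
    ("National ID", [r"national.?id", r"ssn"]),
]
```

Moving a type up the list therefore changes which label wins when several patterns match the same column.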
Permissions
To customize the information protection policy for your Azure tenant, you'll need the
following actions on the tenant's root management group:
Microsoft.Security/informationProtectionPolicies/read
Microsoft.Security/informationProtectionPolicies/write
Next steps
In this article, you learned about defining an information protection policy in Microsoft
Defender for Cloud. To learn more about using SQL Information Protection to classify
and protect sensitive data in your SQL databases, see Azure SQL Database Data
Discovery and Classification.
For more information on security policies and data security in Defender for Cloud, see
the following articles:
Setting security policies in Microsoft Defender for Cloud: Learn how to configure
security policies for your Azure subscriptions and resource groups
Microsoft Defender for Cloud data security: Learn how Defender for Cloud
manages and safeguards data
SQL vulnerability assessment rules
reference guide
Article • 12/29/2022
This article lists the set of built-in rules that are used to flag security vulnerabilities and
highlight deviations from best practices, such as misconfigurations and excessive
permissions. The rules are based on Microsoft's best practices and focus on the security
issues that present the biggest risks to your database and its valuable data. They cover
both database-level and server-level security issues, such as server firewall settings and
server-level permissions. These rules also represent many of the requirements from
various regulatory bodies for meeting their compliance standards.
Applies to: Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, SQL Server (all supported versions)
The rules shown in your database scans depend on the SQL version and platform that
was scanned.
For a list of changes to these rules, see SQL vulnerability assessment rules changelog.
Rule categories
SQL vulnerability assessment rules have five categories, which are in the following
sections:
1 SQL Server 2012+ refers to all versions of SQL Server 2012 and above.
2 SQL Server 2017+ refers to all versions of SQL Server 2017 and above.
3 SQL Server 2016+ refers to all versions of SQL Server 2016 and above.
Rule ID | Rule Title | Severity | Description | Platform
VA1020 | Database user GUEST should not be a member of any role | High | The guest user permits access to a database for any logins that are not mapped to a specific database user. | SQL Server 2012+
VA1043 | Principal GUEST should not have access to any user database | Medium | The guest user permits access to a database for any logins that are not mapped to a specific database user. | SQL Server 2012+
VA1047 | Password expiration check should be enabled for all SQL logins | Low | Password expiration policies are used to manage the lifespan of a password. When SQL Server enforces password expiration policy, accounts with expired passwords are disabled. | SQL Server 2012+; SQL Managed Instance
VA1054 | Excessive permissions should not be granted to PUBLIC role on objects or columns | Low | Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, it inherits the permissions granted to PUBLIC on that object. | SQL Server 2012+; SQL Managed Instance
VA1067 | Database Mail XPs should be disabled when it is not in use | Medium | This rule checks that Database Mail is disabled when no database mail profile is configured. Database Mail can be used for sending e-mail messages from the SQL Server Database Engine and is disabled by default. If you are not using this feature, it is recommended to disable it to reduce the surface area. | SQL Server 2012+
VA1070 | Database users shouldn't share the same name as a server login | Low | Database users may share the same name as a server login. This rule validates that there are no such users. | SQL Server 2012+; SQL Managed Instance
VA1072 | Authentication mode should be Windows Authentication | Medium | There are two possible authentication modes: Windows Authentication mode and mixed mode. Mixed mode means that SQL Server enables both Windows authentication and SQL Server authentication. This rule checks that the authentication mode is set to Windows Authentication. | SQL Server 2012+
VA1094 | Database permissions shouldn't be granted directly to principals | Low | Permissions are rules associated with a securable object to regulate which users can gain access to the object. | SQL Server 2012+
VA1095 | Excessive permissions should not be granted to PUBLIC role | Medium | Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, it inherits the permissions granted to PUBLIC on that object. | SQL Server 2012+
VA1096 | Principal GUEST should not be granted permissions in the database | Low | Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but do not have a database user. | SQL Server 2012+
VA1097 | Principal GUEST should not be granted permissions on objects or columns | Low | Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but do not have a database user. | SQL Server 2012+
VA1099 | GUEST user should not be granted permissions on database securables | Low | Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but do not have a database user. | SQL Server 2012+
VA1267 | Contained users should use Windows Authentication | Medium | Contained users are users that exist within the database and do not require a login mapping. This rule checks that contained users use Windows Authentication. | SQL Server 2012+
VA1280 | Server Permissions granted to public should be minimized | Medium | Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, it inherits the permissions granted to PUBLIC on that object. | SQL Server 2012+
VA1282 | Orphan roles should be removed | Low | Orphan roles are user-defined roles that have no members. Eliminate orphaned roles as they are not needed on the system. | SQL Server 2012+; SQL Database; Azure Synapse
Rule ID | Rule Title | Severity | Description | Platform
VA2020 | Minimal set of principals should be granted ALTER or ALTER ANY USER database-scoped permissions | High | Every SQL Server securable has permissions associated with it that can be granted to principals. | SQL Server 2012+
VA2033 | Minimal set of principals should be granted database-scoped EXECUTE permission on objects or columns | Low | This rule checks which principals are granted EXECUTE permission on objects or columns to ensure this permission is granted to a minimal set of principals. | SQL Server 2012+
VA2103 | Unnecessary execute permissions on extended stored procedures should be revoked | Medium | Extended stored procedures are DLLs that an instance of SQL Server can dynamically load and run. | SQL Server 2012+
VA2107 | Minimal set of principals should be members of fixed Azure SQL DB master database roles | High | SQL Database provides two restricted administrative roles in the master database. | SQL Database
VA2108 | Minimal set of principals should be members of fixed high impact database roles | High | SQL Server provides roles to help manage permissions. Roles are security principals that group other principals. | SQL Server 2012+; Azure Synapse
VA2109 | Minimal set of principals should be members of fixed low impact database roles | Low | SQL Server provides roles to help manage permissions. Roles are security principals that group other principals. | SQL Server 2012+; Azure Synapse
VA2114 | Minimal set of principals should be members of high impact fixed server roles | High | SQL Server provides roles to help manage permissions. Roles are security principals that group other principals. | SQL Server 2012+
VA2129 | Changes to signed modules should be authorized | High | You can sign a stored procedure, function, or trigger with a certificate or an asymmetric key. | SQL Server 2012+
VA2130 | Track all users with access to the database | Low | This check tracks all users with access to a database. Make sure that these users are authorized. | SQL Database
VA2201 | SQL logins with commonly used names should be disabled | High | This rule checks the accounts with database owner permission for commonly used names. Assigning commonly used names to accounts with database owner permission increases the likelihood of successful brute force attacks. | SQL Server 2012+
Rule ID | Rule Title | Severity | Description | Platform
VA1091 | Auditing of both successful and failed login attempts (default trace) should be enabled when 'Login auditing' is set up to track logins | Low | SQL Server Login auditing configuration enables administrators to track the users logging into SQL Server instances. If the user chooses to count on 'Login auditing' to track users logging into SQL Server instances, then it is important to enable it for both successful and failed login attempts. | SQL Server 2012+
VA1093 | Maximum number of error logs should be 12 or more | Low | Each SQL Server Error log will have all the information related to failures / errors that have occurred since SQL Server was last restarted or since the last time you have recycled the error logs. This rule checks that the maximum number of error logs is 12 or more. | SQL Server 2012+
VA1264 | Auditing of both successful and failed login attempts should be enabled | Low | SQL Server auditing configuration enables administrators to track the users logging into SQL Server instances that they're responsible for. | SQL Server 2012+
VA1265 | Auditing of both successful and failed login attempts for contained DB authentication should be enabled | Medium | SQL Server auditing configuration enables administrators to track users logging to SQL Server instances that they're responsible for. | SQL Server 2012+
VA1281 | All memberships for user-defined roles should be intended | Medium | User-defined roles are security principals defined by the user to group principals to easily manage permissions. Monitoring these memberships is important to avoid excessive permissions. | SQL Server 2012+; Azure Synapse
VA1283 | There should be at least 1 active audit in the system | Low | Auditing an instance of the SQL Server Database Engine or an individual database involves tracking and logging events that occur on the Database Engine. | SQL Server 2012+
VA2061 | Auditing should be enabled at the server level | High | Azure SQL Database Auditing tracks database events and writes them to an audit log. | SQL Database
Data Protection
Rule ID | Rule Title | Severity | Description | Platform
VA1098 | Any Existing SSB or Mirroring endpoint should require AES connection | High | Service Broker and Mirroring endpoints support different encryption algorithms, including no-encryption. This rule checks that any existing endpoint requires AES encryption. | SQL Server 2012+
VA1220 | Database communication using TDS should be protected through TLS | High | Microsoft SQL Server can use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) to encrypt data that is transmitted across a network. | SQL Server 2012+
VA1221 | Database Encryption Symmetric Keys should use AES algorithm | High | SQL Server uses encryption keys to help secure data, credentials, and connection information that is stored in a server database. | SQL Server 2012+; SQL Database; Azure Synapse
VA1223 | Certificate keys should use at least 2048 bits | High | Certificate keys are used in RSA and other encryption algorithms to protect data. These keys need to be of enough length to secure the encrypted data. | SQL Server 2012+; SQL Database; Azure Synapse
VA1224 | Asymmetric keys' length should be at least 2048 bits | High | Database asymmetric keys are used in many encryption algorithms; these keys need to be of enough length to secure the encrypted data. This rule checks that all asymmetric keys stored in the database are of a length of at least 2048 bits. | SQL Server 2012; SQL Server 2014; SQL Database
VA1279 | Force encryption should be enabled for TDS | High | When the Force Encryption option for the Database Engine is enabled, all communications between client and server are encrypted regardless of whether the 'Encrypt connection' option (such as from SSMS) is checked or not. This rule checks that the Force Encryption option is enabled. | SQL Server 2012; SQL Server 2014; SQL Server 2016; SQL Server 2017
Surface Area Reduction
Rule ID | Rule Title | Severity | Description | Platform
VA1022 | Ad hoc distributed queries should be disabled | Medium | Ad hoc distributed queries use the OPENROWSET and OPENDATASOURCE functions to connect to remote data sources that use OLE DB. This rule checks that ad hoc distributed queries are disabled. | SQL Server 2012+
VA1023 | CLR should be disabled | High | The CLR allows managed code to be hosted by and run in the Microsoft SQL Server environment. This rule checks that CLR is disabled. | SQL Server 2012+
VA1026 | CLR should be disabled | Medium | The CLR allows managed code to be hosted by and run in the Microsoft SQL Server environment. CLR strict security treats SAFE and EXTERNAL_ACCESS assemblies as if they were marked UNSAFE. | SQL Server 2017+
VA1044 | Remote Admin Connections should be disabled unless specifically required | Medium | This rule checks that remote dedicated admin connections are disabled if they are not being used for clustering, to reduce attack surface area. | SQL Server 2012+
VA1051 | AUTO_CLOSE should be disabled on all databases | Medium | The AUTO_CLOSE option specifies whether the database shuts down gracefully and frees resources after the last user disconnects. Regardless of its benefits, it can cause denial of service by aggressively opening and closing the database, so it is important to keep this feature disabled. This rule checks that this option is disabled on the current database. | SQL Server 2012+
VA1066 | Unused service broker endpoints should be removed | Low | Service Broker provides queuing and reliable messaging for SQL Server. Service Broker is used both for applications that use a single SQL Server instance and applications that distribute work across multiple instances. Service Broker endpoints provide options for transport security and message forwarding. This rule enumerates all the service broker endpoints. Remove those that are not used. | SQL Server 2012+
VA1071 | 'Scan for startup stored procedures' option should be disabled | Medium | When 'Scan for startup procs' is enabled, SQL Server scans for and runs all automatically run stored procedures defined on the server. This rule checks that this option is disabled. | SQL Server 2012+
VA1092 | SQL Server instance shouldn't be advertised by the SQL Server Browser service | Low | SQL Server uses the SQL Server Browser service to enumerate instances of the Database Engine installed on the computer. This enables client applications to browse for a server and helps clients distinguish between multiple instances of the Database Engine on the same computer. This rule checks that the SQL instance is hidden. | SQL Server 2012+
VA1143 | 'dbo' user should not be used for normal service operation | Medium | The 'dbo' or database owner is a user account that has implied permissions to perform all activities in the database. Members of the sysadmin fixed server role are automatically mapped to dbo. | SQL Server 2012+; Azure Synapse
VA1144 | Model database should only be accessible by 'dbo' | Medium | The Model database is used as the template for all databases created on the instance of SQL Server. Modifications made to the model database are applied to all databases created afterwards. | SQL Server 2012+
VA1230 | Filestream should be disabled | High | FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary (max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Enabling Filestream on SQL server exposes additional NTFS streaming API, which increases its attack surface and makes it prone to malicious attacks. This rule checks that Filestream is disabled. | SQL Server 2012+
VA1244 | Orphaned users should be removed | Medium | A database user that exists on a database but has no corresponding login in the master database or as an external resource (for example, a Windows user) should be removed. | SQL Server 2012+
VA1245 | The dbo information should be consistent between the target DB and master | High | There is redundant information about the dbo identity for any database: metadata stored in the database itself and metadata stored in master DB. This rule checks that this information is consistent. | SQL Server 2012+
VA1247 | There should be no SPs marked as auto-start | High | When SQL Server has been configured to 'scan for startup procs', the server will scan master DB for stored procedures marked as auto-start. This rule checks that there are no SPs marked as auto-start. | SQL Server 2012+
VA1256 | User CLR assemblies should not be defined in the database | High | CLR assemblies can be used to execute arbitrary code on the SQL Server process. This rule checks that there are no user-defined CLR assemblies in the database. | SQL Server 2012+
VA1278 | Create a baseline of External Key Management providers | Medium | The SQL Server Extensible Key Management (EKM) enables third-party EKM / Hardware Security Modules (HSM) vendors to register their modules in SQL Server. | SQL Server 2012+
VA2062 | Database-level firewall rules should not grant excessive access | High | The Azure SQL Database-level firewall helps protect your data by preventing all access to your database until you specify which IP addresses have permission. | SQL Database
VA2063 | Server-level firewall rules should not grant excessive access | High | The Azure SQL server-level firewall helps protect your server by preventing all access to your databases until you specify which IP addresses have permission. | SQL Database
VA2064 | Database-level firewall rules should be tracked and maintained at a strict minimum | High | The Azure SQL Database-level firewall helps protect your data by preventing all access to your database until you specify which IP addresses have permission. | SQL Database
VA2065 | Server-level firewall rules should be tracked and maintained at a strict minimum | High | The Azure SQL server-level firewall helps protect your data by preventing all access to your databases until you specify which IP addresses have permission. | SQL Database
VA2111 | Sample databases should be removed | Low | Microsoft SQL Server comes shipped with several sample databases. This rule checks whether the sample databases have been removed. | SQL Server 2012+
VA2120 | Features that may affect security should be disabled | High | SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default may not be necessary, and enabling them could adversely affect the security of the system. | SQL Server 2012+
VA2121 | 'OLE Automation Procedures' feature should be disabled | High | SQL Server is capable of providing a wide range of features and services. Some of the features and services, provided by default, may not be necessary, and enabling them could adversely affect the security of the system. | SQL Server 2012+
VA2122 | 'User Options' feature should be disabled | Medium | SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default may not be necessary, and enabling them could adversely affect the security of the system. | SQL Server 2012+
VA2126 | Extensibility-features that may affect security should be disabled if not needed | Medium | SQL Server provides a wide range of features and services. Some of the features and services, provided by default, may not be necessary, and enabling them could adversely affect the security of the system. This rule checks that configurations that allow extraction of data to an external data source and the execution of scripts with certain remote language extensions are disabled. | SQL Server 2016+
Removed rules
Rule ID | Rule Title
VA1069 | Permissions to select from system tables and views should be revoked from non-sysadmins
VA1090 | Ensure all Government Off The Shelf (GOTS) and Custom Stored Procedures are encrypted
VA1229 | Filestream setting in registry and in SQL Server configuration should match
VA1252 | List of events being audited and centrally managed via server audit specifications
VA1253 | List of DB-scoped events being audited and centrally managed via server audit specifications
VA2000 | Minimal set of principals should be granted high impact database-scoped permissions
VA2001 | Minimal set of principals should be granted high impact database-scoped permissions on objects or columns
VA2002 | Minimal set of principals should be granted high impact database-scoped permissions on various securables
VA2040 | Minimal set of principals should be granted low impact database-scoped permissions
VA2041 | Minimal set of principals should be granted low impact database-scoped permissions on objects or columns
VA2042 | Minimal set of principals should be granted low impact database-scoped permissions on schema
VA2100 | Minimal set of principals should be granted high impact server-scoped permissions
VA2101 | Minimal set of principals should be granted medium impact server-scoped permissions
VA2102 | Minimal set of principals should be granted low impact server-scoped permissions
VA2104 | Execute permissions on extended stored procedures should be revoked from PUBLIC
VA2112 | Permissions from PUBLIC for Data Transformation Services (DTS) should be revoked
VA2115 | Minimal set of principals should be members of medium impact fixed server roles
Next steps
Vulnerability assessment
SQL vulnerability assessment rules changelog
SQL vulnerability assessment rules changelog
Article • 12/29/2022
This article details the changes made to the SQL vulnerability assessment service rules.
Rules that are updated, removed, or added will be outlined below. For an updated list of
SQL vulnerability assessment rules, see SQL vulnerability assessment rules.
June 2022
Rule ID Rule Title Change details
VA1047 Password expiration check should be enabled for all SQL logins Logic change
January 2022
Rule ID Rule Title Change details
VA1054 Minimal set of principals should be members of fixed high impact database roles Logic change
VA1220 Database communication using TDS should be protected through TLS Logic change
VA2120 Features that may affect security should be disabled Logic change
June 2021
Rule ID Rule Title Change details
VA1220 Database communication using TDS should be protected through TLS Logic change
VA2108 Minimal set of principals should be members of fixed high impact database roles Logic change
December 2020
Rule ID Rule Title Change details
VA1017 Execute permissions on xp_cmdshell from all users (except dbo) should be revoked Title and description change
VA1042 Database ownership chaining should be disabled for all databases except for master, msdb, and tempdb Description change
VA1044 Remote Admin Connections should be disabled unless specifically required Title and description change
VA1047 Password expiration check should be enabled for all SQL logins Title and description change
VA1053 Account with default name 'sa' should be renamed or disabled Description change
VA1067 Database Mail XPs should be disabled when it is not in use Title and description change
VA1069 Permissions to select from system tables and views should be revoked from non-sysadmins Removed rule
VA1090 Ensure all Government Off The Shelf (GOTS) and Custom Stored Procedures are encrypted Removed rule
VA1091 Auditing of both successful and failed login attempts (default trace) should be enabled when 'Login auditing' is set up to track logins Description change
VA1098 Any Existing SSB or Mirroring endpoint should require AES connection Logic change
VA1229 Filestream setting in registry and in SQL Server configuration should match Removed rule
VA1252 List of events being audited and centrally managed via server audit specifications Removed rule
VA1253 List of DB-scoped events being audited and centrally managed via server audit specifications Removed rule
VA1263 List all the active audits in the system Removed rule
VA1264 Auditing of both successful and failed login attempts should be enabled Description change
VA1266 The 'MUST_CHANGE' option should be set on all SQL logins Removed rule
VA1281 All memberships for user-defined roles should be intended Logic change
VA2062 Database-level firewall rules should not grant excessive access Description change
VA2063 Server-level firewall rules should not grant excessive access Description change
VA2100 Minimal set of principals should be granted high impact server-scoped permissions Removed rule
VA2101 Minimal set of principals should be granted medium impact server-scoped permissions Removed rule
VA2102 Minimal set of principals should be granted low impact server-scoped permissions Removed rule
VA2108 Minimal set of principals should be members of fixed high impact database roles Logic change
VA2112 Permissions from PUBLIC for Data Transformation Services (DTS) should be revoked Removed rule
VA2113 Data Transformation Services (DTS) permissions should only be granted to SSIS roles Description and logic change
VA2114 Minimal set of principals should be members of high impact fixed server roles Logic change
VA2115 Minimal set of principals should be members of medium impact fixed server roles Removed rule
VA2120 Features that may affect security should be disabled Logic change
VA2126 Features that may affect security should be disabled Title, description, and logic change
VA2130 Track all users with access to the database Description and logic change
Next steps
SQL vulnerability assessment rules
SQL vulnerability assessment overview
Store vulnerability assessment scan results in a storage account accessible behind
firewalls and VNets
Optimized locking
Article • 05/03/2023
Applies to:
Azure SQL Database
This article introduces the optimized locking feature, a new SQL Server Database Engine
capability that offers an improved transaction locking mechanism that reduces lock
memory consumption and blocking for concurrent transactions.
For example:
Without optimized locking, updating 1 million rows in a table may require 1 million
exclusive (X) row locks held until the end of the transaction.
With optimized locking, updating 1 million rows in a table may require 1 million X
row locks but each lock is released as soon as each row is updated, and only one
TID lock will be held until the end of the transaction.
This article covers the two core concepts of optimized locking, Transaction ID (TID) locking and lock after qualification (LAQ), in detail.
Availability
Currently, optimized locking is available in Azure SQL Database only. For more
information, see Where is optimized locking currently available?
Is optimized locking enabled?
Optimized locking is enabled per user database. Connect to your database, then use the
following query to check if optimized locking is enabled on your database:
SQL
You should receive 1 (optimized locking is enabled) or 0 (optimized locking is disabled). If you
are not connected to the database specified in DATABASEPROPERTYEX, the result will be NULL.
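A minimal sketch of that check, assuming the database property is named IsOptimizedLockingOn:

```sql
-- Returns 1 if optimized locking is enabled, 0 if disabled,
-- and NULL if you are not connected to the named database.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsOptimizedLockingOn') AS IsOptimizedLockingOn;
```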
Both ADR and RCSI are enabled by default in Azure SQL Database. To verify that these
options are enabled for your current database, use the following T-SQL query:
SQL
SELECT name
    , is_read_committed_snapshot_on
    , is_accelerated_database_recovery_on
FROM sys.databases
WHERE name = DB_NAME();
Locking overview
This is a short summary of the behavior when optimized locking is not enabled. For
more information, review the Transaction locking and row versioning guide.
In the Database Engine, locking is a mechanism that prevents multiple transactions from
updating the same data simultaneously, in order to protect data integrity and
consistency.
When a transaction needs to modify data, it can request a lock on the data. The lock is
granted if no other conflicting locks are held on the data, and the transaction can
proceed with the modification. If another conflicting lock is held on the data, the
transaction must wait for the lock to be released before it can proceed.
When multiple transactions are allowed to access the same data concurrently, the
Database Engine must resolve potentially complex conflicts with concurrent reads and
writes. Locking is one of the mechanisms by which the database engine can provide the
semantics for the ANSI SQL transaction isolation levels. Although locking in databases is
essential, reduced concurrency, deadlocks, complexity, and lock overhead can impact
performance and scalability.
With TID locking, instead of taking the lock on the key of the row, a lock is taken on the
TID of the row. The modifying transaction will hold an X lock on its TID. Other
transactions will acquire an S lock on the TID to check if the first transaction is still
active. With TID locking, page and row locks continue to be taken for updates, but each
page and row lock is released as soon as each row is updated. The only lock held until
end of transaction is the X lock on the TID resource, replacing page and row (key) locks
as demonstrated in the next demo. (Other standard database and object locks are not
affected by optimized locking.)
Optimized locking helps to reduce lock memory as very few locks are held for large
transactions. In addition, optimized locking also avoids lock escalations. This allows
other concurrent transactions to access the table.
Consider the following T-SQL sample scenario that looks for locks on the user's current
session:
SQL
CREATE TABLE t0
(a int not null
,b int null);
GO
BEGIN TRAN
UPDATE t0
SET b=b+10;
COMMIT TRAN
GO
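The lock inspection mentioned above can be sketched with a query against sys.dm_tran_locks (also listed under Next steps); the columns selected here are an illustrative choice:

```sql
-- Show locks held or requested by the current session.
SELECT resource_type, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
```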
The same query without the benefit of optimized locking creates four locks:
Without optimized locking, predicates from queries are checked row by row in a scan by
first taking an update (U) row lock. If the predicate is satisfied, an X row lock is taken
before updating the row.
With optimized locking, and when the read committed snapshot isolation level (RCSI) is
enabled, predicates are applied on latest committed version without taking any row
locks. If the predicate does not satisfy, the query moves to the next row in the scan. If
the predicate is satisfied, an X row lock is taken to actually update the row. The X row
lock is released as soon as the row update is complete, before the end of the
transaction.
Since predicate evaluation is performed without acquiring any locks, concurrent queries
modifying different rows will not block each other.
Example:
SQL
CREATE TABLE t1
(a int not null
,b int null);
GO
Session 1 Session 2
Session 1 Session 2
BEGIN TRAN
UPDATE t1
SET b=b+10
WHERE a=1;
BEGIN TRAN
UPDATE t1
SET b=b+10
WHERE a=2;
COMMIT TRAN
COMMIT TRAN
Note that the behavior of blocking changes with optimized locking in the previous
example. Without optimized locking, Session 2 will be blocked.
However, with optimized locking, Session 2 will not be blocked as the latest committed
version of row 1 contains a=1, which does not satisfy the predicate of Session 2.
If the predicate is satisfied, we wait for any active transaction on the row to finish. If we
had to wait for the S TID lock, the row might have changed, and the latest committed
version might have changed. In that case, instead of aborting the transaction due to an
update conflict, the Database Engine will retry the predicate evaluation on the same row.
If the predicate qualifies upon retry, the row will be updated.
SQL
CREATE TABLE t2
(a int not null
,b int null);
GO
Session 1 Session 2
BEGIN TRAN
UPDATE t2
SET b=b+10
WHERE a=1;
Session 1 Session 2
BEGIN TRAN
UPDATE t2
SET b=b+10
WHERE a=1;
COMMIT TRAN
COMMIT TRAN
SQL
GO
Session 1 Session 2
BEGIN TRAN T1
UPDATE t1
SET b=2
WHERE a=1;
BEGIN TRAN T2
UPDATE t1
SET b=3
WHERE b=2;
COMMIT TRAN
COMMIT TRAN
Let's evaluate the outcome of the above scenario with and without lock after
qualification (LAQ), an integral part of optimized locking.
Without LAQ
Without LAQ, transaction T2 will be blocked and wait for the transaction T1 to complete.
After both transactions commit, table t1 will contain the following rows:
a | b
1 | 3
With LAQ
With LAQ, transaction T2 will use the latest committed version of the row (b = 1 in the
version store) to evaluate its predicate (b = 2). The row does not qualify; hence it is
skipped, and T2 moves to the next row without having been blocked by transaction T1.
In this example, LAQ removes blocking but leads to different results.
After both transactions commit, table t1 will contain the following rows:
a | b
1 | 2
Important
Even without LAQ, applications should not assume that SQL Server (under
versioning isolation levels) will guarantee strict ordering without using locking
hints. Our general recommendation for customers on concurrent systems under
RCSI with workloads that rely on strict execution order of transactions (as shown in
the previous exercise) is to use stricter isolation levels.
In Azure SQL Database, RCSI is enabled by default and read committed is the default
isolation level. With RCSI enabled and when using read committed isolation level,
readers don't block writers and writers don't block readers. Readers read a version of the
row from the snapshot taken at the start of the query. With LAQ, writers will qualify rows
per the predicate based on the latest committed version of the row without acquiring U
locks. With LAQ, a query will wait only if the row qualifies and there is an active write
transaction on that row. Qualifying based on the latest committed version and locking
only the qualified rows reduces blocking and increases concurrency.
In addition to reduced blocking, the lock memory required will be reduced. This is
because readers don't take any locks, and writers take only short duration locks, instead
of locks that expire at the end of the transaction. When using stricter isolation levels like
repeatable read or serializable, the Database Engine is forced to hold row and page
locks until the end of the transaction, for both readers and writers, resulting in increased
blocking and lock memory.
With optimized locking, there are no restrictions on existing queries and queries do not
need to be rewritten. Queries that are not using hints will benefit most from optimized
locking.
A table hint on one table in a query will not disable optimized locking for other tables in
the same query. Further, optimized locking only affects the locking behavior of tables
being updated by an UPDATE statement. For example:
SQL
CREATE TABLE t3 (a int not null, b int null);
CREATE TABLE t4 (a int not null, b int null);
GO
UPDATE t4 WITH (UPDLOCK)  -- table hint on t4 only; the hint choice is illustrative
SET t4.b = t4.b + 10
FROM t3
WHERE t3.a = t4.a;
GO
In the previous query example, only table t4 will be affected by the locking hint, while
t3 can still benefit from optimized locking.
SQL
In the previous query example, only table t3 will use the repeatable read isolation level,
and will hold locks until the end of the transaction. Other updates to t3 can still benefit
from optimized locking. The same applies to the HOLDLOCK hint.
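The paragraph above can be sketched as a statement like the following; the hint placement and values are assumptions, reusing table t3 from the earlier example:

```sql
-- The REPEATABLEREAD hint applies only to this statement's access to t3;
-- other updates to t3 can still use optimized locking.
UPDATE t3 WITH (REPEATABLEREAD)
SET b = b + 10
WHERE a = 1;
```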
Use the following steps to create a new support request from the Azure portal for Azure
SQL Database.
5. For Subscription, Service, and Resource, select the desired SQL Database.
9. In Additional details, provide as much information as possible for why you would
like to disable optimized locking. We are interested to review the reasons and use
cases for disabling optimized locking with you.
Next steps
Transaction locking and row versioning guide
Read committed snapshot isolation (RCSI)
sys.dm_tran_locks (Transact-SQL)
Accelerated database recovery in Azure SQL
Accelerated database recovery
Tutorial: Migrate SQL Server to an Azure
SQL Managed Instance offline using
DMS (classic)
Article • 03/08/2023
Important
Note
This tutorial uses an older version of the Azure Database Migration Service. For
improved functionality and supportability, consider migrating to Azure SQL
Managed Instance by using the Azure SQL migration extension for Azure Data
Studio.
You can use Azure Database Migration Service to migrate the databases from a SQL
Server instance to an Azure SQL Managed Instance. For additional methods that may
require some manual effort, see the article SQL Server to Azure SQL Managed Instance.
Important
For offline migrations from SQL Server to SQL Managed Instance, Azure Database
Migration Service can create the backup files for you. Alternately, you can provide
the latest full database backup in the SMB network share that the service will use to
migrate your databases. Each backup can be written to either a separate backup file
or multiple backup files. However, appending multiple backups into a single backup
media is not supported. Note that you can use compressed backups as well, to
reduce the likelihood of experiencing potential issues with migrating large backups.
Tip
In Azure Database Migration Service, you can migrate your databases offline or
while they are online. In an offline migration, application downtime starts when the
migration starts. To limit downtime to the time it takes you to cut over to the new
environment after the migration, use an online migration. We recommend that you
test an offline migration to determine whether the downtime is acceptable. If the
expected downtime isn't acceptable, do an online migration.
This article describes an offline migration from SQL Server to a SQL Managed Instance.
For an online migration, see Migrate SQL Server to an SQL Managed Instance online
using DMS.
Prerequisites
To complete this tutorial, you need to:
Enable the TCP/IP protocol, which is disabled by default during SQL Server Express
installation, by following the instructions in the article Enable or Disable a Server
Network Protocol.
Create a Microsoft Azure Virtual Network for Azure Database Migration Service by
using the Azure Resource Manager deployment model, which provides site-to-site
connectivity to your on-premises source servers by using either ExpressRoute or
VPN. Learn network topologies for SQL Managed Instance migrations using Azure
Database Migration Service. For more information about creating a virtual network,
see the Virtual Network Documentation, and especially the quickstart articles with
step-by-step details.
Note
During virtual network setup, if you use ExpressRoute with network peering to
Microsoft, add the following service endpoints to the subnet in which the
service will be provisioned:
Target database endpoint (for example, SQL endpoint, Azure Cosmos DB
endpoint, and so on)
Storage endpoint
Service bus endpoint
Ensure that your virtual network Network Security Group rules don't block the
outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For
more detail on virtual network NSG traffic filtering, see the article Filter network
traffic with network security groups.
Open your Windows Firewall to allow Azure Database Migration Service to access
the source SQL Server, which by default is TCP port 1433. If your default instance is
listening on some other port, add that to the firewall.
If you're running multiple named SQL Server instances using dynamic ports, you
may wish to enable the SQL Browser Service and allow access to UDP port 1434
through your firewalls so that Azure Database Migration Service can connect to a
named instance on your source server.
If you're using a firewall appliance in front of your source databases, you may need
to add firewall rules to allow Azure Database Migration Service to access the
source database(s) for migration, as well as files via SMB port 445.
Create a SQL Managed Instance by following the detail in the article Create a SQL
Managed Instance in the Azure portal.
Ensure that the logins used to connect the source SQL Server and target SQL
Managed Instance are members of the sysadmin server role.
Note
After restarting the service, Windows user/group logins appear in the list of
logins available for migration. For any Windows user/group logins you
migrate, you are prompted to provide the associated domain name. Service
user accounts (account with domain name NT AUTHORITY) and virtual user
accounts (account name with domain name NT SERVICE) are not supported.
Create a network share that Azure Database Migration Service can use to back up
the source database.
Ensure that the service account running the source SQL Server instance has write
privileges on the network share that you created and that the computer account
for the source server has read/write access to the same share.
Make a note of a Windows user (and password) that has full control privilege on
the network share that you previously created. Azure Database Migration Service
impersonates the user credential to upload the backup files to Azure Storage
container for restore operation.
Create a blob container and retrieve its SAS URI by using the steps in the article
Manage Azure Blob Storage resources with Storage Explorer. Be sure to select all
permissions (Read, Write, Delete, List) on the policy window while creating the SAS
URI. This detail provides Azure Database Migration Service with access to your
storage account container for uploading the backup files used for migrating
databases to SQL Managed Instance.
Note
Azure Database Migration Service does not support using an account level
SAS token when configuring the Storage Account settings during the
Configure Migration Settings step.
Ensure both the Azure Database Migration Service IP address and the Azure SQL
Managed Instance subnet can communicate with the blob container.
2. Select the subscription in which you want to create the instance of Azure Database
Migration Service, and then select Resource providers.
Select an existing virtual network or create a new one. The virtual network
provides Azure Database Migration Service with access to the source server
and the target instance. For more information about how to create a virtual
network in the Azure portal, see the article Create a virtual network using the
Azure portal.
Select Review + Create to review the details and then select Create to create
the service.
After a few moments, your instance of the Azure Database Migration Service
is created and ready to use.
Note
For additional detail, see the article Network topologies for Azure SQL Managed
Instance migrations using Azure Database Migration Service.
Create a migration project
After an instance of the service is created, locate it within the Azure portal, open it, and
then create a new migration project.
1. In the Azure portal menu, select All services. Search for and select Azure Database
Migration Services.
2. On the Azure Database Migration Services screen, select the Azure Database
Migration Service instance that you created.
4. On the New migration project screen, specify a name for the project, in the
Source server type text box, select SQL Server, in the Target server type text box,
select Azure SQL Database Managed Instance, and then for Choose type of
activity, select Offline data migration.
5. Select Create and run activity to create the project and run the migration activity.
Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server
instance name. You can also use the IP Address for situations in which DNS name
resolution isn't possible.
2. If you haven't installed a trusted certificate on your server, select the Trust server
certificate check box.
TLS connections that are encrypted using a self-signed certificate do not
provide strong security. They are susceptible to man-in-the-middle attacks.
You should not rely on TLS using self-signed certificates in a production
environment or on servers that are connected to the internet.
If you haven't already provisioned the SQL Managed Instance, select the link to
help you provision the instance. You can still continue with project creation and
then, when the SQL Managed Instance is ready, return to this specific project to
execute the migration.
2. Select Next: Select databases. On the Select databases screen, select the
AdventureWorks2016 database for migration.
Important
If you use SQL Server Integration Services (SSIS), DMS does not currently
support migrating the catalog database for your SSIS projects/packages
(SSISDB) from SQL Server to SQL Managed Instance. However, you can
provision SSIS in Azure Data Factory (ADF) and redeploy your SSIS
projects/packages to the destination SSISDB hosted by SQL Managed
Instance. For more information about migrating SSIS packages, see the article
Migrate SQL Server Integration Services packages to Azure.
Select logins
1. On the Select logins screen, select the logins that you want to migrate.
Note
Parameter Description
Choose source backup option - Choose the option I will provide latest backup files when you already have full backup files available for DMS to use for database migration. Choose the option I will let Azure Database Migration Service create backup files when you want DMS to take the source database full backup at first and use it for migration.
Network location share - The local SMB network share that Azure Database Migration Service can take the source database backups to. The service account running the source SQL Server instance must have write privileges on this network share. Provide an FQDN or IP address of the server in the network share, for example, '\\servername.domainname.com\backupfolder' or '\\IP address\backupfolder'.
User name - Make sure that the Windows user has full control privilege on the network share that you provided above. Azure Database Migration Service will impersonate the user credential to upload the backup files to the Azure Storage container for the restore operation. If TDE-enabled databases are selected for migration, the above Windows user must be the built-in administrator account, and User Account Control must be disabled for Azure Database Migration Service to upload and delete the certificate files.
Storage account settings - The SAS URI that provides Azure Database Migration Service with access to your storage account container to which the service uploads the backup files and that is used for migrating databases to SQL Managed Instance. Learn how to get the SAS URI for blob container. This SAS URI must be for the blob container, not for the storage account.
TDE Settings - If you're migrating the source databases with Transparent Data Encryption (TDE) enabled, you need to have write privileges on the target SQL Managed Instance. Select the subscription in which the SQL Managed Instance is provisioned from the drop-down menu. Select the target Azure SQL Database Managed Instance in the drop-down menu.
2. Select Next: Summary.
2. Review and verify the details associated with the migration project.
Run the migration
Select Start migration.
The migration activity window appears, displaying the current migration status of
the databases and logins.
2. You can further expand the databases and logins categories to monitor the
migration status of the respective server objects.
3. After the migration completes, verify the target database on the SQL Managed
Instance environment.
Additional resources
For a tutorial showing you how to migrate a database to SQL Managed Instance
using the T-SQL RESTORE command, see Restore a backup to SQL Managed
Instance using the restore command.
For information about SQL Managed Instance, see What is SQL Managed Instance.
For information about connecting apps to SQL Managed Instance, see Connect
applications.
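As a hedged sketch of the RESTORE-from-URL approach mentioned in the first bullet (the storage account, container, file name, and SAS token are placeholders):

```sql
-- Create a credential scoped to the blob container holding the backup,
-- then restore the database directly from that backup file.
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/mycontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';

RESTORE DATABASE [AdventureWorks2016]
FROM URL = 'https://myaccount.blob.core.windows.net/mycontainer/AdventureWorks2016.bak';
```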
Quickstart: Run simple Python scripts
with SQL machine learning
Article • 03/03/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In this quickstart, you'll run a set of simple Python scripts using SQL Server Machine
Learning Services, Azure SQL Managed Instance Machine Learning Services, or SQL
Server Big Data Clusters. You'll learn how to use the stored procedure
sp_execute_external_script to execute the script in a SQL Server instance.
Prerequisites
You need the following prerequisites to run this quickstart.
A tool for running SQL queries that contain Python scripts. This quickstart uses
Azure Data Studio.
In the following steps, you'll run this example Python script in your database:
Python
a = 1
b = 2
c = a/b
d = a*b
print(c, d)
1. Open a new query window in Azure Data Studio connected to your SQL instance.
The script is passed through the @script argument. Everything inside the @script
argument must be valid Python code.
SQL
EXECUTE sp_execute_external_script @language = N'Python'
    , @script = N'
a = 1
b = 2
c = a/b
d = a*b
print(c, d)
'
3. The correct result is calculated and the Python print function returns the result to
the Messages window.
Results
text
0.5 2
SQL
GO
Input Description
@script - Defines the commands passed to the Python runtime. Your entire Python script must be enclosed in this argument, as Unicode text. You could also add the text to a variable of type nvarchar and then call the variable.
@input_data_1 - Data returned by the query, passed to the Python runtime, which returns the data as a data frame.
WITH RESULT SETS - Clause defines the schema of the returned data table for SQL machine learning, adding "Hello World" as the column name, int for the data type.
Hello World
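Assembling the arguments described above, a complete call might look like this sketch (the query in @input_data_1 is illustrative):

```sql
EXECUTE sp_execute_external_script @language = N'Python'
    , @script = N'
OutputDataSet = InputDataSet;
'
    , @input_data_1 = N'SELECT 1'
WITH RESULT SETS(([Hello World] int));
```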
For now, let's use the default input and output variables of sp_execute_external_script :
InputDataSet and OutputDataSet.
SQL
VALUES (1);
VALUES (10);
VALUES (100);
GO
SQL
SELECT *
FROM PythonTestData
Results
3. Run the following Python script. It retrieves the data from the table using the
SELECT statement, passes it through the Python runtime, and returns the data as a
data frame. The WITH RESULT SETS clause defines the schema of the returned data
table for SQL, adding the column name NewColName.
SQL
Results
4. Now change the names of the input and output variables. The default input and
output variable names are InputDataSet and OutputDataSet; the following script
changes the names to SQL_in and SQL_out:
SQL
, @input_data_1_name = N'SQL_in'
, @output_data_1_name = N'SQL_out'
Note that Python is case-sensitive. The input and output variables used in the
Python script (SQL_out, SQL_in) need to match the names defined with
@input_data_1_name and @output_data_1_name , including case.
Tip
Only one input dataset can be passed as a parameter, and you can return only
one dataset. However, you can call other datasets from inside your Python
code and you can return outputs of other types in addition to the dataset. You
can also add the OUTPUT keyword to any parameter to have it returned with
the results.
5. You can also generate values just using the Python script with no input data
( @input_data_1 is set to blank).
SQL
, @script = N'
import pandas as pd
mytextvariable = ["hello", " ", "world"]  # illustrative values
OutputDataSet = pd.DataFrame(mytextvariable);
'
, @input_data_1 = N''
Results
Tip
Python uses leading spaces to group statements. So when the embedded Python
script spans multiple lines, as in the preceding script, don't try to indent the Python
commands to be in line with the SQL commands. For example, this script will
produce an error:
SQL
EXECUTE sp_execute_external_script @language = N'Python'
, @script = N'
import pandas as pd
OutputDataSet = pd.DataFrame(mytextvariable);
'
, @input_data_1 = N''
SQL
, @script = N'
import sys
print(sys.version)
'
GO
The Python print function returns the version to the Messages window. In the example
output below, you can see that in this case, Python version 3.5.2 is installed.
Results
text
To see a list of which Python packages are installed, including version, run the following
script.
SQL
, @script = N'
import pkg_resources
import pandas
dists = [str(d) for d in pkg_resources.working_set]
OutputDataSet = pandas.DataFrame(dists)
'
GO
Next steps
To learn how to use data structures when using Python in SQL machine learning, follow
this quickstart:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In this quickstart, you'll learn how to use data structures and data types when using
Python in SQL Server Machine Learning Services, Azure SQL Managed Instance Machine
Learning Services, or on SQL Server Big Data Clusters. You'll learn about moving data
between Python and SQL Server, and the common issues that might occur.
SQL machine learning relies on the Python pandas package, which is great for working
with tabular data. However, you cannot pass a scalar from Python to your database and
expect it to just work. In this quickstart, you'll review some basic data structure
definitions, to prepare you for additional issues that you might run across when passing
tabular data between Python and the database.
How would you expose the single result of a calculation as a data frame, if a data.frame
requires a tabular structure? One answer is to represent the single scalar value as a
series, which is easily converted to a data frame.
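The scalar-to-series-to-frame conversion described above can be sketched in plain pandas (outside SQL) like this:

```python
import pandas as pd

# A single scalar result of a calculation...
c = 1 / 2

# ...becomes a one-element Series (a one-dimensional labeled array)...
s = pd.Series([c])

# ...which converts directly into the tabular DataFrame shape SQL expects.
df = pd.DataFrame(s)
print(df)
```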
Note
When returning dates, Python in SQL uses DATETIME which has a restricted date
range of 1753-01-01(-53690) through 9999-12-31(2958463).
Prerequisites
You need the following prerequisites to run this quickstart.
A tool for running SQL queries that contain Python scripts. This quickstart uses
Azure Data Studio.
1. A series requires an index, which you can assign manually, as shown here, or
programmatically.
SQL
, @script = N'
import pandas as pd
a = 1
b = 2
c = a/b
s = pd.Series(c, index=["example"])  # index assigned manually; the label is illustrative
print(c)
print(s)
'
Because the series hasn't been converted to a data.frame, the values are returned
in the Messages window, but you can see that the results are in a more tabular
format.
Results
text
0.5
dtype: float64
2. To increase the length of the series, you can add new values, using an array.
SQL
, @script = N'
import pandas
a = 1
b = 2
c = a/b
d = a*b
s = pandas.Series([c,d])
print(s)
'
If you do not specify an index, an index is generated that has values starting with 0
and ending with the length of the array.
Results
text
0 0.5
1 2.0
dtype: float64
3. If you increase the number of index values, but don't add new data values, the
data values are repeated to fill the series.
SQL
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import pandas
a = 1
b = 2
c = a/b
s = pandas.Series(c, index = ["a", "b"])  # one value, two index entries: the value repeats
print(s)
'
Results
text
a    0.5
b    0.5
dtype: float64
SQL
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import pandas as pd
a = 1
b = 2
c = a/b
d = a*b
s = pd.Series([c,d])
print(s)
df = pd.DataFrame(s)
OutputDataSet = df
'
WITH RESULT SETS ((ResultValue FLOAT NOT NULL));
The result is shown below. Even if you use the index to get specific values from the
data.frame, the index values aren't part of the output.
Results
ResultValue
0.5
2.0
1. The following example gets a value from the series using an integer index.
SQL
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import pandas as pd
a = 1
b = 2
c = a/b
d = a*b
s = pd.Series([c,d])
print(s)
df = pd.DataFrame(s, index=[1])
OutputDataSet = df
'
WITH RESULT SETS ((ResultValue FLOAT NOT NULL));
Results
ResultValue
2.0
Remember that the auto-generated index starts at 0. Try using an out of range
index value and see what happens.
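Outside the database, the same behavior is easy to check with plain pandas: an index value beyond the auto-generated range raises a KeyError.

```python
import pandas as pd

s = pd.Series([0.5, 2.0])  # auto-generated index labels: 0 and 1

# A label inside the range works.
print(s.loc[1])  # 2.0

# An out-of-range index value raises a KeyError.
try:
    s.loc[5]
except KeyError as err:
    print("KeyError:", err)
```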
2. Now get a single value from the other data frame using a string index.
SQL
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import pandas as pd
a = 1
b = 2
c = a/b
d = a*b
s = pd.Series([c,d], index = ["a", "b"])  # string index labels are illustrative
print(s)
df = pd.DataFrame(s, index = ["a"])
OutputDataSet = df
'
WITH RESULT SETS ((ResultValue FLOAT NOT NULL));
Results
ResultValue
0.5
If you try to use a numeric index to get a value from this series, you get an error.
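A minimal pandas sketch of that error, with illustrative labels:

```python
import pandas as pd

s = pd.Series([0.5, 2.0], index=["a", "b"])  # string index

# Label-based access with a string key succeeds.
print(s.loc["a"])  # 0.5

# A numeric key is not one of the string labels, so label-based
# access raises a KeyError.
try:
    s.loc[0]
except KeyError as err:
    print("KeyError:", err)
```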
Next steps
To learn about writing advanced Python functions with SQL machine learning, follow this
quickstart:
Write advanced Python functions
Quickstart: Python functions with SQL
machine learning
Article • 03/03/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In this quickstart, you'll learn how to use Python mathematical and utility functions with
SQL Server Machine Learning Services, Azure SQL Managed Instance Machine Learning
Services, or SQL Server Big Data Clusters. Statistical functions are often complicated to
implement in T-SQL, but can be done in Python with only a few lines of code.
Prerequisites
You need the following prerequisites to run this quickstart.
A tool for running SQL queries that contain Python scripts. This quickstart uses
Azure Data Studio.
For example, the following Python code returns 100 numbers with a mean of 50, given a
standard deviation of 3.
Python
numpy.random.normal(size=100, loc=50, scale=3)
To call this line of Python from T-SQL, add the Python function in the Python script
parameter of sp_execute_external_script . The output expects a data frame, so use
pandas to convert it.
SQL
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import numpy
import pandas
OutputDataSet = pandas.DataFrame(numpy.random.normal(size=100, loc=50, scale=3));
'
WITH RESULT SETS ((Density FLOAT NOT NULL));
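Run on its own, the same NumPy call produces the data frame directly; the generator is seeded here only so the sketch is reproducible:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# 100 samples from a normal distribution with mean 50 and standard deviation 3.
df = pd.DataFrame(rng.normal(loc=50, scale=3, size=100))

print(df.shape)      # (100, 1)
print(df[0].mean())  # close to 50
```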
What if you'd like to make it easier to generate a different set of random numbers? You
define a stored procedure that gets the arguments from the user, then pass those
arguments into the Python script as variables.
SQL
-- The procedure name MyPyNorm is illustrative.
CREATE PROCEDURE MyPyNorm (
  @param1 INT
, @param2 INT
, @param3 INT
)
AS
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import numpy
import pandas
OutputDataSet = pandas.DataFrame(numpy.random.normal(size=mynumbers,
loc=mymean, scale=mysd));
'
, @params = N'@mynumbers INT, @mymean INT, @mysd INT'
, @mynumbers = @param1
, @mymean = @param2
, @mysd = @param3
WITH RESULT SETS ((Density FLOAT NOT NULL));
The first line defines each of the SQL input parameters that are required when the
stored procedure is executed.
The line beginning with @params defines all variables used by the Python code, and
the corresponding SQL data types.
The lines that immediately follow map the SQL parameter names to the
corresponding Python variable names.
Now that you've wrapped the Python function in a stored procedure, you can easily call
the function and pass in different values, like this:
SQL
-- MyPyNorm is the illustrative procedure name used above.
EXECUTE MyPyNorm @param1 = 100, @param2 = 50, @param3 = 3;
For example, you might use system timing functions in the time package to measure
the amount of time used by Python processes and analyze performance issues.
SQL
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import time
start_time = time.time()

# Run the Python processes to measure here.

elapsed_time = time.time() - start_time
'
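The timing pattern inside the script is ordinary Python; the workload below is only a stand-in:

```python
import time

start_time = time.time()

# The Python work being measured goes here; summing a range stands in
# for a real workload.
total = sum(range(1_000_000))

elapsed_time = time.time() - start_time
print(f"elapsed: {elapsed_time:.4f}s, result: {total}")
```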
Next steps
To create a machine learning model using Python with SQL machine learning, follow this
quickstart:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In this quickstart, you'll create and train a predictive model using Python. You'll save the
model to a table in your SQL Server instance, and then use the model to predict values
from new data using SQL Server Machine Learning Services, Azure SQL Managed
Instance Machine Learning Services, or SQL Server Big Data Clusters.
You'll create and execute two stored procedures running in SQL. The first one uses the
classic Iris flower data set and generates a Naïve Bayes model to predict an Iris species
based on flower characteristics. The second procedure is for scoring - it calls the model
generated in the first procedure to output a set of predictions based on new data. By
placing Python code in a SQL stored procedure, operations are contained in SQL, are
reusable, and can be called by other stored procedures and client applications.
Prerequisites
You need the following prerequisites to run this quickstart.
A tool for running SQL queries that contain Python scripts. This quickstart uses
Azure Data Studio.
The sample data used in this exercise is the Iris sample data. Follow the instructions
in Iris demo data to create the sample database irissql.
1. Open Azure Data Studio, connect to your SQL instance, and open a new query
window.
SQL
USE irissql
GO
Inputs needed by your Python code are passed as input parameters on this stored
procedure. Output will be a trained model, based on the Python scikit-learn library
for the machine learning algorithm.
This code uses pickle to serialize the model. The model will be trained using
data from columns 0 through 4 from the iris_data table.
The parameters you see in the second part of the procedure articulate data inputs
and model outputs. As much as possible, you want the Python code running in a
stored procedure to have clearly defined inputs and outputs that map to stored
procedure inputs and outputs passed in at run time.
SQL
-- The parameter and input-query details are reconstructed to match the description above.
CREATE PROCEDURE generate_iris_model (@trained_model VARBINARY(max) OUTPUT)
AS
BEGIN
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import pickle
from sklearn.naive_bayes import GaussianNB
GNB = GaussianNB()
trained_model = pickle.dumps(GNB.fit(iris_data[["Sepal.Length",
"Sepal.Width", "Petal.Length", "Petal.Width"]],
iris_data[["SpeciesId"]].values.ravel()))
'
, @input_data_1 = N'SELECT "Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width", "SpeciesId" FROM iris_data'
, @input_data_1_name = N'iris_data'
, @params = N'@trained_model VARBINARY(max) OUTPUT'
, @trained_model = @trained_model OUTPUT;
END;
GO
If the T-SQL script from the previous step ran without error, a new stored
procedure called generate_iris_model is created and added to the irissql database.
You can find stored procedures in the Azure Data Studio Object Explorer, under
Programmability.
Models that are stored for reuse in your database are serialized as a byte stream and
stored in a VARBINARY(MAX) column in a database table. Once the model is created,
trained, serialized, and saved to a database, it can be called by other procedures or by
the PREDICT T-SQL function in scoring workloads.
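The serialize-then-score cycle those procedures rely on can be sketched in plain Python. The tiny hand-built "model" below is only a stand-in for a trained estimator; pickle.dumps produces the byte stream that would land in the VARBINARY(MAX) column.

```python
import pickle

import numpy as np

# Stand-in for a trained model: a dict of linear coefficients.
model = {"coef": np.array([0.5, -1.0]), "intercept": 2.0}

# Serialize to bytes -- the form stored in a VARBINARY(MAX) column.
model_bits = pickle.dumps(model)

# A scoring procedure later loads the bytes back and predicts.
restored = pickle.loads(model_bits)
x = np.array([4.0, 1.0])
prediction = float(restored["coef"] @ x + restored["intercept"])
print(prediction)  # 3.0
```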
1. Run the following script to execute the procedure. The specific statement for
executing a stored procedure is EXECUTE on the fourth line.
This particular script deletes an existing model of the same name ("Naive Bayes")
to make room for new ones created by rerunning the same procedure. Without
model deletion, an error occurs stating the object already exists. The model is
stored in a table called iris_models, provisioned when you created the irissql
database.
SQL
DECLARE @model VARBINARY(max);
EXECUTE generate_iris_model @model OUTPUT;

DELETE iris_models WHERE model_name = 'Naive Bayes';
INSERT INTO iris_models (model_name, model) VALUES('Naive Bayes', @model);
GO
SQL
SELECT model_name, model FROM dbo.iris_models;
Results
model_name model
1. Run the following code to create the stored procedure that performs scoring. At
run time, this procedure will load a binary model, use columns [1,2,3,4] as inputs,
and specify columns [0,5,6] as output.
SQL
CREATE PROCEDURE predict_species (@model VARCHAR(100))
AS
BEGIN
DECLARE @nb_model VARBINARY(max) = (
    SELECT model
    FROM iris_models
    WHERE model_name = @model
);
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
import pickle
irismodel = pickle.loads(nb_model)
species_pred = irismodel.predict(iris_data[["Sepal.Length",
"Sepal.Width", "Petal.Length", "Petal.Width"]])
iris_data["PredictedSpecies"] = species_pred
OutputDataSet = iris_data[["id","SpeciesId","PredictedSpecies"]]
print(OutputDataSet)
'
, @input_data_1 = N'SELECT id, "Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width", "SpeciesId" FROM iris_data'
, @input_data_1_name = N'iris_data'
, @params = N'@nb_model VARBINARY(max)'
, @nb_model = @nb_model
WITH RESULT SETS((
  "id" INT
, "SpeciesId" INT
, "SpeciesId.Predicted" INT
));
END;
GO
2. Execute the stored procedure, giving the model name "Naive Bayes" so that the
procedure knows which model to use.
SQL
EXECUTE predict_species 'Naive Bayes';
GO
When you run the stored procedure, it returns a Python data.frame. This line of T-
SQL specifies the schema for the returned results: WITH RESULT SETS ( ("id" int,
"SpeciesId" int, "SpeciesId.Predicted" int)); . You can insert the results into a
table, or return them to the calling application.
The results are 150 predictions about species using floral characteristics as inputs.
For the majority of the observations, the predicted species matches the actual
species.
This example has been made simple by using the Python iris dataset for both
training and scoring. A more typical approach would involve running a SQL query
to get the new data, and passing that into Python as InputDataSet .
Conclusion
In this exercise, you learned how to create stored procedures dedicated to different
tasks, where each stored procedure used the system stored procedure
sp_execute_external_script to start a Python process. Inputs to the Python process are
passed to sp_execute_external_script as parameters. Both the Python script itself and data
variables in a database are passed as inputs.
Generally, you should only plan on using Azure Data Studio with polished Python code,
or simple Python code that returns row-based output. As a tool, Azure Data Studio
supports query languages like T-SQL and returns flattened rowsets. If your code
generates visual output like a scatterplot or histogram, you need a separate tool or end-
user application that can render the image outside of the stored procedure.
For some Python developers who are used to writing all-inclusive script handling a
range of operations, organizing tasks into separate procedures might seem unnecessary.
But training and scoring have different use cases. By separating them, you can put each
task on a different schedule and scope permissions to each operation.
A final benefit is that the processes can be modified using parameters. In this exercise,
Python code that created the model (named "Naive Bayes" in this example) was passed
as an input to a second stored procedure calling the model in a scoring process. This
exercise only uses one model, but you can imagine how parameterizing the model in a
scoring task would make that script more useful.
Next steps
For more information on tutorials for Python with SQL machine learning, see:
Python tutorials
Deploy and make predictions with an
ONNX model and SQL machine learning
Article • 01/04/2023
In this quickstart, you'll learn how to train a model, convert it to ONNX, deploy it to
Azure SQL Edge, and then run native PREDICT on data using the uploaded ONNX
model.
This quickstart is based on scikit-learn and uses the Boston Housing dataset .
For each script part below, enter it in a cell in the Azure Data Studio notebook and
run the cell.
Train a pipeline
Split the dataset to use features to predict the median value of a house.
Python
import numpy as np
import onnxmltools
import onnxruntime as rt
import pandas as pd
import skl2onnx
import sklearn
import sklearn.datasets

boston = sklearn.datasets.load_boston()
boston

df = pd.DataFrame(data=np.c_[boston['data'], boston['target']],
columns=boston['feature_names'].tolist() + ['MEDV'])

target_column = 'MEDV'

# Use the feature columns as x and the median value column as y.
x_train = pd.DataFrame(df.drop([target_column], axis=1))
y_train = pd.DataFrame(df.iloc[:,df.columns.tolist().index(target_column)])

print(x_train.head())
print(y_train.head())
Output:
text
      CRIM    ZN  INDUS  CHAS    NOX     RM   AGE     DIS  RAD    TAX  \
0  0.00632  18.0   2.31   0.0  0.538  6.575  65.2  4.0900  1.0  296.0
1  0.02731   0.0   7.07   0.0  0.469  6.421  78.9  4.9671  2.0  242.0
2  0.02729   0.0   7.07   0.0  0.469  7.185  61.1  4.9671  2.0  242.0
3  0.03237   0.0   2.18   0.0  0.458  6.998  45.8  6.0622  3.0  222.0
4  0.06905   0.0   2.18   0.0  0.458  7.147  54.2  6.0622  3.0  222.0

   PTRATIO  B  LSTAT  (remaining feature columns omitted)

   MEDV
0  24.0
1  21.6
2  34.7
3  33.4
4  36.2
Create a pipeline to train the LinearRegression model. You can also use other regression
models.
Python
preprocessor = ColumnTransformer(
transformers=[
model = Pipeline(
steps=[
('preprocessor', preprocessor),
('regressor', LinearRegression())])
model.fit(x_train, y_train)
Check the accuracy of the model and then calculate the R2 score and mean squared
error.
Python
import sklearn.metrics

y_pred = model.predict(x_train)
r2 = sklearn.metrics.r2_score(y_train, y_pred)
print('*** Scikit-learn r2 score: {}'.format(r2))
mse = sklearn.metrics.mean_squared_error(y_train, y_pred)
print('*** Scikit-learn MSE: {}'.format(mse))
Output:
text
*** Scikit-learn r2 score: 0.7406426641094094
Python
from skl2onnx.common.data_types import DoubleTensorType, FloatTensorType, Int64TensorType

def convert_dataframe_schema(df, drop=None):
    inputs = []
    nrows = df.shape[0]
    for k, v in zip(df.columns, df.dtypes):
        if drop is not None and k in drop:
            continue
        if v == 'int64':
            t = Int64TensorType([nrows, 1])
        elif v == 'float32':
            t = FloatTensorType([nrows, 1])
        elif v == 'float64':
            t = DoubleTensorType([nrows, 1])
        else:
            raise Exception('Unsupported dtype: {}'.format(v))
        inputs.append((k, t))
    return inputs
Using skl2onnx , convert the LinearRegression model to the ONNX format and save it
locally.
Python
# initial_types is built with the convert_dataframe_schema helper defined above.
onnx_model = onnxmltools.convert_sklearn(model, initial_types=convert_dataframe_schema(x_train))
onnx_model_path = 'boston1.model.onnx'
onnxmltools.utils.save_model(onnx_model, onnx_model_path)
Note
You may need to set the target_opset parameter for the skl2onnx.convert_sklearn
function if there is a mismatch between the ONNX runtime version in SQL Edge and
the skl2onnx package. For more information, see the SQL Edge Release notes to get
the ONNX runtime version corresponding to the release, and pick the target_opset
for the ONNX runtime based on the ONNX backward compatibility matrix.
Note
ONNX Runtime uses floats instead of doubles so small discrepancies are possible.
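The discrepancy comes from single-precision rounding, which is easy to see in isolation:

```python
import numpy as np

x = 0.1               # Python float: 64-bit double
x32 = np.float32(x)   # ONNX Runtime computes in 32-bit floats

# The round trip through float32 loses precision, so the values
# differ by a tiny amount.
print(float(x32) == x)      # False
print(abs(float(x32) - x))  # small but nonzero
```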
Python
import onnxruntime as rt
import sklearn.metrics

sess = rt.InferenceSession(onnx_model_path)
y_pred = np.full(len(x_train), np.nan)
for i in range(len(x_train)):
    inputs = {}
    for j in range(len(x_train.columns)):
        inputs[x_train.columns[j]] = np.full(shape=(1,1),
fill_value=x_train.iloc[i,j])
    sess_pred = sess.run(None, inputs)
    y_pred[i] = sess_pred[0][0][0]

r2 = sklearn.metrics.r2_score(y_train, y_pred)
print('*** Onnx r2 score: {}'.format(r2))
Output:
text
*** Onnx r2 score: 0.7406426691136831
Python
import pyodbc
cursor = conn.cursor()
database = 'onnx'
cursor.execute(query)
conn.commit()
cursor.execute(query)
conn.commit()
cursor = conn.cursor()
table_name = 'models'
cursor.execute(query)
conn.commit()
f'[description] varchar(1000))'
cursor.execute(query)
conn.commit()
model_bits = onnx_model.SerializeToString()
insert_params = (pyodbc.Binary(model_bits))
cursor.execute(query, insert_params)
conn.commit()
First, create two tables, features and target, to store subsets of the Boston housing
dataset.
Features contains all data being used to predict the target, median value.
Target contains the median value for each record in the dataset.
Python
import sqlalchemy
import urllib
conn = pyodbc.connect(db_connection_string)
cursor = conn.cursor()
features_table_name = 'features'
cursor.execute(query)
conn.commit()
query = \
cursor.execute(query)
conn.commit()
target_table_name = 'target'
query = \
print(x_train.head())
print(y_train.head())
Finally, use sqlalchemy to insert the x_train and y_train pandas dataframes into the
tables features and target , respectively.
Python
sql_engine = sqlalchemy.create_engine(db_connection_string)
Note
SQL
USE onnx
GO

DECLARE @model VARBINARY(max) = (
    SELECT DATA
    FROM dbo.models
    WHERE id = 1
);

WITH predict_input
AS (
    SELECT [id]
    , CRIM
    , ZN
    , INDUS
    , CHAS
    , NOX
    , RM
    , AGE
    , DIS
    , RAD
    , TAX
    , PTRATIO
    , B
    , LSTAT
    FROM [dbo].[features]
)
SELECT predict_input.id
    , p.variable1 AS MEDV
FROM PREDICT(MODEL = @model, DATA = predict_input)
WITH (variable1 FLOAT) AS p;
Next steps
Machine Learning and AI with ONNX in SQL Edge
Quickstart: Run simple R scripts with
SQL machine learning
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In this quickstart, you'll run a set of simple R scripts using Azure SQL Managed Instance
Machine Learning Services. You'll learn how to use the stored procedure
sp_execute_external_script to execute the script in your database.
Prerequisites
You need the following prerequisites to run this quickstart.
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
A tool for running SQL queries that contain R scripts. This quickstart uses Azure
Data Studio.
a <- 1
b <- 2
c <- a/b
d <- a*b
print(c(c, d))
The script is passed through the @script argument. Everything inside the @script
argument must be valid R code.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
a <- 1
b <- 2
c <- a/b
d <- a*b
print(c(c, d))
'
3. The correct result is calculated and the R print function returns the result to the
Messages window.
Results
text
0.5 2
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'OutputDataSet <- InputDataSet'
, @input_data_1 = N'SELECT 1 AS hello'
WITH RESULT SETS (([Hello World] INT NOT NULL));
GO
Input Description
@script Defines the commands passed to the R runtime. Your entire R script must be
enclosed in this argument, as Unicode text. You could also add the text to a
variable of type nvarchar and then call the variable.
@input_data_1 Data returned by the query, passed to the R runtime, which returns the
data as a data frame.
WITH RESULT SETS Clause that defines the schema of the returned data table, adding
"Hello World" as the column name, int for the data type.
Results
Hello World
For now, let's use the default input and output variables of sp_execute_external_script :
InputDataSet and OutputDataSet.
SQL
CREATE TABLE RTestData (col1 INT NOT NULL);
INSERT INTO RTestData VALUES (1);
INSERT INTO RTestData VALUES (10);
INSERT INTO RTestData VALUES (100);
GO
SQL
SELECT *
FROM RTestData
Results
3. Run the following R script. It retrieves the data from the table using the SELECT
statement, passes it through the R runtime, and returns the data as a data frame.
The WITH RESULT SETS clause defines the schema of the returned data table for
SQL, adding the column name NewColName.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'OutputDataSet <- InputDataSet;'
, @input_data_1 = N'SELECT * FROM RTestData;'
WITH RESULT SETS (([NewColName] INT NOT NULL));
Results
4. Now let's change the names of the input and output variables. The default input
and output variable names are InputDataSet and OutputDataSet, this script
changes the names to SQL_in and SQL_out:
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'SQL_out <- SQL_in;'
, @input_data_1 = N'SELECT 12 AS Col;'
, @input_data_1_name = N'SQL_in'
, @output_data_1_name = N'SQL_out'
WITH RESULT SETS ((NewColName INT NOT NULL));
Note that R is case-sensitive. The input and output variables used in the R script
(SQL_out, SQL_in) need to match the names defined with @input_data_1_name and
@output_data_1_name , including case.
Tip
Only one input dataset can be passed as a parameter, and you can return only
one dataset. However, you can call other datasets from inside your R code
and you can return outputs of other types in addition to the dataset. You can
also add the OUTPUT keyword to any parameter to have it returned with the
results.
5. You also can generate values just using the R script with no input data
( @input_data_1 is set to blank).
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
mytextvariable <- c("hello", " ", "world");
OutputDataSet <- as.data.frame(mytextvariable);'
, @input_data_1 = N''
WITH RESULT SETS (([Col1] CHAR(20)));
Results
Check R version
If you would like to see which version of R is installed, run the following script.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'print(version)';
GO
The R print function returns the version to the Messages window. In the example
output below, you can see that in this case, R version 3.4.4 is installed.
Results
text
STDOUT message(s) from external script:
platform x86_64-w64-mingw32
arch x86_64
os mingw32
status
major 3
minor 4.4
year 2018
month 03
day 15
language R
List R packages
To see a list of which R packages are installed, including version, dependencies, license,
and library path information, run the following script.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
OutputDataSet <- data.frame(installed.packages()[,c("Package", "Version", "Depends", "License", "LibPath")]);'
WITH RESULT SETS ((
  Package NVARCHAR(255)
, Version NVARCHAR(100)
, Depends NVARCHAR(4000)
, License NVARCHAR(1000)
, LibPath NVARCHAR(2000)
));
Results
Next steps
To learn how to use data structures when using R with SQL machine learning, follow this
quickstart:
Handle data types and objects using R with SQL machine learning
Quickstart: Data structures, data types,
and objects using R with SQL machine
learning
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In this quickstart, you'll learn how to use data structures and data types when using R in
Azure SQL Managed Instance Machine Learning Services. You'll learn about moving data
between R and SQL Managed Instance, and the common issues that might occur.
Prerequisites
You need the following prerequisites to run this quickstart.
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
A tool for running SQL queries that contain R scripts. This quickstart uses Azure
Data Studio.
First, let's experiment with some basic R objects - vectors, matrices, and lists - and see
how conversion to a data frame changes the output passed to SQL Server.
Compare these two "Hello World" scripts in R. The scripts look almost identical, but the
first returns a single column of three values, whereas the second returns three columns
with a single value each.
Example 1
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
mytextvariable <- c("hello", " ", "world");
OutputDataSet <- data.frame(mytextvariable);'
WITH RESULT SETS (([Col1] CHAR(20)));
Example 2
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
OutputDataSet <- data.frame(c("hello"), " ", c("world"));'
WITH RESULT SETS (([Col1] CHAR(20), [Col2] CHAR(20), [Col3] CHAR(20)));
The answer can usually be found by using the R str() command. Add the function
str(object_name) anywhere in your R script to have the data schema of the specified R
object returned as an informational message.
To figure out why Example 1 and Example 2 have such different results, insert the line
str(OutputDataSet) at the end of the @script variable definition in each statement, like
this:
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
mytextvariable <- c("hello", " ", "world");
OutputDataSet <- data.frame(mytextvariable);
str(OutputDataSet);'
WITH RESULT SETS (([Col1] CHAR(20)));
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
OutputDataSet <- data.frame(c("hello"), " ", c("world"));
str(OutputDataSet);'
WITH RESULT SETS (([Col1] CHAR(20), [Col2] CHAR(20), [Col3] CHAR(20)));
Now, review the text in Messages to see why the output is different.
Results - Example 1
text
Results - Example 2
text
As you can see, a slight change in R syntax had a big effect on the schema of the results.
We won't go into why, but the differences in R data types are explained in detail in the
Data Structures section in "Advanced R" by Hadley Wickham .
For now, just be aware that you need to check the expected results when coercing R
objects into data frames.
Tip
You can also use R identity functions, such as is.matrix , is.vector , to return
information about the internal data structure.
Implicit conversion of data objects
Each R data object has its own rules for how values are handled when combined with
other data objects if the two data objects have the same number of dimensions, or if
any data object contains heterogeneous data types.
SQL
CREATE TABLE RTestData (col1 INT NOT NULL);
INSERT INTO RTestData VALUES (1);
INSERT INTO RTestData VALUES (10);
INSERT INTO RTestData VALUES (100);
GO
For example, assume you run the following statement to perform matrix multiplication
using R. You multiply a single-column matrix with the three values by an array with four
values, and expect a 3x4 matrix as a result.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
x <- as.matrix(InputDataSet);
y <- array(12:15);
OutputDataSet <- data.frame(x %*% y);'
, @input_data_1 = N'SELECT [col1] FROM RTestData;'
WITH RESULT SETS (([Col1] int, [Col2] int, [Col3] int, Col4 int));
Under the covers, the column of three values is converted to a single-column matrix.
Because a matrix is just a special case of an array in R, the array y is implicitly coerced to
a single-column matrix to make the two arguments conform.
Results
Col1 Col2 Col3 Col4
12 13 14 15
120 130 140 150
1200 1300 1400 1500
However, note what happens when you change the size of the array y .
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
x <- as.matrix(InputDataSet);
y <- array(12:14);
OutputDataSet <- data.frame(x %*% y);'
, @input_data_1 = N'SELECT [col1] FROM RTestData;'
WITH RESULT SETS (([Col1] INT));
Results
Col1
1542
Why? In this case, because the two arguments can be handled as vectors of the same
length, R returns the inner product as a matrix. This is the expected behavior according
to the rules of linear algebra; however, it could cause problems if your downstream
application expects the output schema to never change!
Tip
Getting errors? Make sure that you're running the stored procedure in the context
of the database that contains the table, and not in master or another database.
Also, we suggest that you avoid using temporary tables for these examples. Some R
clients will terminate a connection between batches, deleting temporary tables.
For example, the following script defines a numeric array of length 6 and stores it in the
R variable df1 . The numeric array is then combined with the integers of the RTestData
table, which contains three (3) values, to make a new data frame, df2 .
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
df1 <- as.data.frame(array(1:6));
df2 <- as.data.frame(c(InputDataSet, df1));
OutputDataSet <- df2;'
, @input_data_1 = N'SELECT [col1] FROM RTestData;'
WITH RESULT SETS (( [Col2] int not null, [Col3] int not null ));
To fill out the data frame, R repeats the elements retrieved from RTestData as many
times as needed to match the number of elements in the array df1 .
Results
Col2 Col3
1 1
10 2
100 3
1 4
10 5
100 6
Remember that a data frame only looks like a table, and is actually a list of vectors.
SQL Server pushes the data from the query to the R process managed by the
Launchpad service and converts it to an internal representation for greater
efficiency.
The R runtime loads the data into a data.frame variable and performs its own
operations on the data.
The R runtime returns the data to the database engine over a secured internal
connection, and the data is presented in terms of SQL Server data types.
You get the data by connecting to SQL Server using a client or network library that
can issue SQL queries and handle tabular data sets. This client application can
potentially affect the data in other ways.
To see how this works, run a query such as this one on the AdventureWorksDW data
warehouse. This view returns sales data used in creating forecasts.
SQL
USE AdventureWorksDW
GO
SELECT ReportingDate
, Amount
FROM [AdventureWorksDW].[dbo].[vTimeSeries]
Note
You can use any version of AdventureWorks, or create a different query using a
database of your own. The point is to try to handle some data that contains text,
datetime, and numeric values.
Now, try pasting this query as the input to the stored procedure.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'str(InputDataSet);'
, @input_data_1 = N'
SELECT ReportingDate
, Amount
FROM [AdventureWorksDW].[dbo].[vTimeSeries]'
If you get an error, you'll probably need to make some edits to the query text. For
example, the string predicate in the WHERE clause must be enclosed by two sets of
single quotation marks.
After you get the query working, review the results of the str function to see how R
treats the input data.
Results
text
The datetime column has been processed using the R data type, POSIXct.
The text column "ProductSeries" has been identified as a factor, meaning a
categorical variable. String values are handled as factors by default. If you pass a
string to R, it is converted to an integer for internal use, and then mapped back to
the string on output.
Summary
From even these short examples, you can see the need to check the effects of data
conversion when passing SQL queries as input. Because some SQL Server data types are
not supported by R, consider these ways to avoid errors:
Test your data in advance and verify columns or values in your schema that could
be a problem when passed to R code.
Specify columns in your input data source individually, rather than using SELECT * ,
and know how each column will be handled.
Perform explicit casts as necessary when preparing your input data, to avoid
surprises.
Avoid passing columns of data (such as GUIDs or rowguids) that cause errors and
aren't useful for modeling.
For more information on supported and unsupported data types, see R libraries and
data types.
Next steps
To learn about writing advanced R functions with SQL machine learning, follow this
quickstart:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In this quickstart, you'll learn how to use R mathematical and utility functions with
Azure SQL Managed Instance Machine Learning Services. Statistical functions are often
complicated to implement in T-SQL, but can be done in R with only a few lines of code.
Prerequisites
You need the following prerequisites to run this quickstart.
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
A tool for running SQL queries that contain R scripts. This quickstart uses Azure
Data Studio.
For example, the following R code returns 100 numbers with a mean of 50, given a
standard deviation of 3.
R
rnorm(100, mean = 50, sd = 3)
To call this line of R from T-SQL, add the R function in the R script parameter of
sp_execute_external_script , like this:
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
OutputDataSet <- as.data.frame(rnorm(100, mean = 50, sd = 3));'
WITH RESULT SETS (([Density] FLOAT NOT NULL));
What if you'd like to make it easier to generate a different set of random numbers?
That's easy when combined with T-SQL. You define a stored procedure that gets the
arguments from the user, then pass those arguments into the R script as variables.
SQL
-- The procedure name MyRNorm is illustrative.
CREATE PROCEDURE MyRNorm (
  @param1 INT
, @param2 INT
, @param3 INT
)
AS
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
OutputDataSet <- as.data.frame(rnorm(mynumbers, mymean, mysd));'
, @params = N'@mynumbers INT, @mymean INT, @mysd INT'
, @mynumbers = @param1
, @mymean = @param2
, @mysd = @param3
WITH RESULT SETS (([Density] FLOAT NOT NULL));
The first line defines each of the SQL input parameters that are required when the
stored procedure is executed.
The line beginning with @params defines all variables used by the R code, and the
corresponding SQL data types.
The lines that immediately follow map the SQL parameter names to the
corresponding R variable names.
Now that you've wrapped the R function in a stored procedure, you can easily call the
function and pass in different values, like this:
SQL
-- MyRNorm is the illustrative procedure name used above.
EXECUTE MyRNorm @param1 = 100, @param2 = 50, @param3 = 3;
For example, you might use the system timing functions in R, such as system.time and
proc.time , to capture the time used by R processes and analyze performance issues. For
an example, see the tutorial Create Data Features where R timing functions are
embedded in the solution.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
library(utils);
start.time <- proc.time();
# Run R processes to measure here.
elapsed.time <- proc.time() - start.time;
print(elapsed.time);'
For other useful functions, see Use R code profiling functions to improve performance.
Next steps
To create a machine learning model using R with SQL machine learning, follow this
quickstart:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In this quickstart, you'll create and train a predictive model using R. You'll save the
model to a table in your SQL Server instance, and then use the model to predict values
from new data using Azure SQL Managed Instance Machine Learning Services.
You'll create and execute two stored procedures running in SQL. The first one uses the
mtcars dataset included with R and generates a simple generalized linear model (GLM)
that predicts the probability that a vehicle has been fitted with a manual transmission.
The second procedure is for scoring - it calls the model generated in the first procedure
to output a set of predictions based on new data. By placing R code in a SQL stored
procedure, operations are contained in SQL, are reusable, and can be called by other
stored procedures and client applications.
Tip
If you need a refresher on linear models, try this tutorial which describes the
process of fitting a model using rxLinMod: Fitting Linear Models
Prerequisites
You need the following prerequisites to run this quickstart.
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
A tool for running SQL queries that contain R scripts. This quickstart uses Azure
Data Studio.
Create the model
To create the model, you'll create source data for training, create the model and train it
using the data, then store the model in a database where it can be used to generate
predictions with new data.
SQL
-- Only the columns this quickstart uses are created here.
CREATE TABLE dbo.MTCars(
  hp INT NOT NULL
, wt DECIMAL(10, 3) NOT NULL
, am INT NOT NULL
);
GO

INSERT INTO dbo.MTCars
EXEC sp_execute_external_script
  @language = N'R'
, @script = N'MTCars <- mtcars[, c("hp", "wt", "am")];'
, @input_data_1 = N''
, @output_data_1_name = N'MTCars';
Tip
Many datasets, small and large, are included with the R runtime. To get a list
of datasets installed with R, type library(help="datasets") from an R
command prompt.
Create and train the model
The car data contains two predictor columns, both numeric: horsepower ( hp ) and weight
( wt ). From this data, you'll create a generalized linear model (GLM) that estimates the
probability that a vehicle has been fitted with a manual transmission.
To build the model, you define the formula inside your R code, and pass the data as an
input parameter.
SQL
CREATE PROCEDURE generate_GLM
AS
BEGIN
EXEC sp_execute_external_script
  @language = N'R'
, @script = N'
carsModel <- glm(formula = am ~ hp + wt, data = MTCarsData, family = binomial);
trained_model <- data.frame(payload = as.raw(serialize(carsModel, connection = NULL)));'
, @input_data_1 = N'SELECT hp, wt, am FROM dbo.MTCars'
, @input_data_1_name = N'MTCarsData'
, @output_data_1_name = N'trained_model'
WITH RESULT SETS ((model VARBINARY(max)));
END;
GO
SQL
CREATE TABLE GLM_models (
  model_name VARCHAR(30) NOT NULL DEFAULT('default model') PRIMARY KEY
, model VARBINARY(max) NOT NULL
);
2. Run the following Transact-SQL statement to call the stored procedure, generate
the model, and save it to the table you created.
SQL
INSERT INTO GLM_models (model)
EXEC generate_GLM;
Tip
If you run this code a second time, you get this error: "Violation of PRIMARY
KEY constraint...Cannot insert duplicate key in object
dbo.GLM_models". One option for avoiding this error is to
update the name for each new model. For example, you could change the
name to something more descriptive, and include the model type, the day
you created it, and so forth.
SQL
UPDATE GLM_models
SET model_name = 'GLM_' + CONVERT(VARCHAR(8), GETDATE(), 112)  -- the naming scheme is illustrative
WHERE model_name = 'default model';
SQL
CREATE TABLE dbo.NewMTCars(
  hp INT NOT NULL
, wt DECIMAL(10, 3) NOT NULL
, am INT NULL
)
GO

-- Sample rows to score; the values are illustrative.
INSERT INTO dbo.NewMTCars(hp, wt)
VALUES (110, 2.634), (72, 3.435), (60, 4.220), (120, 2.800)
GO
Over time, the table might contain multiple R models, all built using different
parameters or algorithms, or trained on different subsets of data. In this example, we'll
use the model named default model .
SQL
DECLARE @glmmodel VARBINARY(max) =
  (SELECT model FROM GLM_models WHERE model_name = 'default model');

EXEC sp_execute_external_script
  @language = N'R'
, @script = N'
model <- unserialize(as.raw(glmmodel));
predicted.am <- predict(model, newdata = NewMTCars, type = "response");
str(predicted.am);
OutputDataSet <- cbind(NewMTCars, predicted.am);'
, @input_data_1 = N'SELECT hp, wt FROM dbo.NewMTCars'
, @input_data_1_name = N'NewMTCars'
, @params = N'@glmmodel VARBINARY(max)'
, @glmmodel = @glmmodel
WITH RESULT SETS ((new_hp INT, new_wt DECIMAL(10, 3), predicted_am DECIMAL(10, 3)));
Use a SELECT statement to get a single model from the table, and pass it as an
input parameter.
After retrieving the model from the table, call the unserialize function on the
model.
Apply the predict function with appropriate arguments to the model, and provide
the new input data.
Note
In the example, the str function is added during the testing phase, to check the
schema of data being returned from R. You can remove the statement later.
The column names used in the R script are not necessarily passed to the stored
procedure output. Here the WITH RESULTS clause is used to define some new
column names.
Results
It's also possible to use the PREDICT (Transact-SQL) statement to generate a predicted
value or score based on a stored model.
Next steps
For more information on tutorials for R with SQL machine learning, see:
R tutorials
Python tutorial: Predict ski rental with
linear regression with SQL machine
learning
Article • 03/03/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In this four-part tutorial series, you will use Python and linear regression in Azure SQL
Managed Instance Machine Learning Services to predict the number of ski rentals. The
tutorial uses a Python notebook in Azure Data Studio.
Imagine you own a ski rental business and you want to predict the number of rentals
that you'll have on a future date. This information will help you get your stock, staff, and
facilities ready.
In the first part of this series, you'll get set up with the prerequisites. In parts two and
three, you'll develop some Python scripts in a notebook to prepare your data and train a
machine learning model. Then, in part four, you'll run those Python scripts inside the
database using T-SQL stored procedures.
In part two, you'll learn how to load the data from a database into a Python data frame,
and prepare the data in Python.
In part three, you'll learn how to train a linear regression model in Python.
In part four, you'll learn how to store the model in a database, and then create stored
procedures from the Python scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.
Prerequisites
Azure SQL Managed Instance Machine Learning Services - For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
Python IDE - This tutorial uses a Python notebook in Azure Data Studio. For more
information, see How to use notebooks in Azure Data Studio.
SQL query tool - This tutorial assumes you're using Azure Data Studio.
Additional Python packages - The examples in this tutorial series use the following
Python packages that may not be installed by default:
pandas
pyodbc
sklearn
Install any of these packages that aren't already installed (for example, pip install pandas ).
2. Follow the directions in Restore a database to Azure SQL Managed Instance in SQL
Server Management Studio, using these details:
3. You can verify that the restored database exists by querying the dbo.rental_data
table:
SQL
USE TutorialDB;
Clean up resources
If you're not going to continue with this tutorial, delete the TutorialDB database.
Next steps
In part one of this tutorial series, you completed these steps:
To prepare the data from the TutorialDB database, follow part two of this tutorial series:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part two of this four-part tutorial series, you'll prepare data from a database using
Python. Later in this series, you'll use this data to train and deploy a linear regression
model in Python with Azure SQL Managed Instance Machine Learning Services.
" Load the data from the database into a pandas data frame
" Prepare the data in Python by removing some columns
In part three, you'll learn how to train a linear regression machine learning model in
Python.
In part four, you'll learn how to store the model in a database, and then create stored
procedures from the Python scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.
Prerequisites
Part two of this tutorial assumes you have completed part one and its
prerequisites.
Create a new Python notebook in Azure Data Studio and run the script below.
The Python script below imports the dataset from the dbo.rental_data table in your
database to a pandas data frame df.
In the connection string, replace connection details as needed. To use Windows
authentication with an ODBC connection string, specify Trusted_Connection=Yes;
instead of the UID and PWD parameters.
Python
import pyodbc
import pandas
# Replace the placeholders with your server, database, and credentials.
conn_str = 'Driver={SQL Server};Server=<server>;Database=TutorialDB;UID=<username>;PWD=<password>'
conn = pyodbc.connect(conn_str)
query_str = 'SELECT * FROM dbo.rental_data'
df = pandas.read_sql(sql=query_str, con=conn)
results
     Year  Month  Day  RentalCount  WeekDay  Holiday  Snow
0 2014 1 20 445 2 1 0
1 2014 2 13 40 5 0 0
2 2013 3 10 456 1 0 0
3 2014 3 31 38 2 0 0
4 2014 4 24 23 5 0 0
448 2013 2 19 57 3 0 1
449 2015 3 18 26 4 0 0
450 2015 3 24 29 3 0 1
451 2014 3 26 50 4 0 1
Filter the columns of the data frame to remove the ones you don't want to use in
training. RentalCount shouldn't be included, because it's the target of the predictions.
Python
columns = df.columns.tolist()
columns = [c for c in columns if c not in ["Year", "RentalCount"]]
Note the data the training set will have access to:
results
     Month  Day  WeekDay  Holiday  Snow
1 2 13 5 0 0
3 3 31 2 0 0
7 3 8 7 0 0
15 3 4 2 0 1
22 1 18 1 0 0
416 4 13 1 0 1
421 1 21 3 0 1
438 2 19 4 0 1
441 2 3 3 0 1
447 1 4 6 0 1
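The column-filtering step above can be sketched on a toy data frame. This is a hedged illustration: the column names follow the rental data, and the preview above suggests both Year and RentalCount are excluded from the training inputs.

```python
import pandas

# Toy frame with the same columns as dbo.rental_data.
df = pandas.DataFrame({
    "Year": [2014, 2015], "Month": [1, 3], "Day": [20, 18],
    "RentalCount": [445, 26], "WeekDay": [2, 4], "Holiday": [1, 0], "Snow": [0, 0],
})
# Keep everything except the target (RentalCount) and Year.
columns = [c for c in df.columns.tolist() if c not in ["Year", "RentalCount"]]
print(df[columns].columns.tolist())
```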
Next steps
In part two of this tutorial series, you completed these steps:
Load the data from the database into a pandas data frame
Prepare the data in Python by removing some columns
To train a machine learning model that uses data from the TutorialDB database, follow
part three of this tutorial series:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part three of this four-part tutorial series, you'll train a linear regression model in
Python. In the next part of this series, you'll deploy this model in an Azure SQL Managed
Instance database with Machine Learning Services.
In part two, you learned how to load the data from a database into a Python data frame,
and prepare the data in Python.
In part four, you'll learn how to store the model in a database, and then create stored
procedures from the Python scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.
Prerequisites
Part three of this tutorial assumes you have completed part one and its
prerequisites.
Python
target = "Rentalcount"
# Select anything not in the training set and put it in the testing set.
test = df.loc[~df.index.isin(train.index)]
lin_model = LinearRegression()
lin_model.fit(train[columns], train[target])
results
Make predictions
Use the predict function to predict the rental counts using the model lin_model .
Python
lin_predictions = lin_model.predict(test[columns])
print("Predictions:", lin_predictions)
from sklearn.metrics import mean_squared_error
# Compute error between our test predictions and the actual values.
print("MSE:", mean_squared_error(lin_predictions, test[target]))
results
207.65572019]
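The train, predict, and error steps can be put together in a self-contained sketch. This is a hedged example on synthetic data, not the tutorial's rental dataset: the target is an exact linear function of two made-up features, so the error should be essentially zero.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(100, 2))   # two synthetic features
y = 3 * X[:, 0] - 2 * X[:, 1] + 5       # exact linear target, no noise

# Simple 80/20 split, mirroring the tutorial's train/test division.
X_train, X_test = X[:80], X[80:]
y_train, y_test = y[:80], y[80:]

lin_model = LinearRegression().fit(X_train, y_train)
preds = lin_model.predict(X_test)
mse = mean_squared_error(preds, y_test)
print("MSE:", mse)
```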
Next steps
In part three of this tutorial series, you completed these steps:
To deploy the machine learning model you've created, follow part four of this tutorial
series:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part four of this four-part tutorial series, you'll deploy a linear regression model
developed in Python into an Azure SQL Managed Instance database using Machine
Learning Services.
In part two, you learned how to load the data from a database into a Python data frame,
and prepare the data in Python.
In part three, you learned how to train a linear regression machine learning model in
Python.
Prerequisites
Part four of this tutorial assumes you have completed part one and its
prerequisites.
Run the following T-SQL statement in Azure Data Studio to create the stored procedure
to train the model.
SQL
-- Stored procedure that trains and generates a Python model using the rental_data and a linear regression algorithm
go
AS
BEGIN
EXECUTE sp_execute_external_script
@language = N'Python'
, @script = N'
import pickle
from sklearn.linear_model import LinearRegression
df = rental_train_data
columns = df.columns.tolist()
target = "RentalCount"
lin_model = LinearRegression()
lin_model.fit(df[columns], df[target])
trained_model = pickle.dumps(lin_model)'
, @input_data_1_name = N'rental_train_data'
END;
GO
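The pickle.dumps call at the end of the script is what turns the trained model into bytes that T-SQL can store in a varbinary column. The round trip can be sketched on its own (a minimal illustration with a tiny made-up dataset):

```python
import pickle
from sklearn.linear_model import LinearRegression

# Train a trivial model: y = 2x.
model = LinearRegression().fit([[1], [2], [3]], [2, 4, 6])

blob = pickle.dumps(model)      # bytes suitable for a varbinary(max) column
restored = pickle.loads(blob)   # what the scoring procedure does later

print(restored.predict([[4]])[0])
```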
1. Run the following T-SQL statement in Azure Data Studio to create a table called
dbo.rental_py_models which is used to store the model.
SQL
USE TutorialDB;
GO
);
GO
2. Save the model to the table as a binary object, with the model name linear_model.
SQL
SQL
GO
AS
BEGIN
EXECUTE sp_execute_external_script
@language = N'Python',
@script = N'
import pickle
import pandas
rental_model = pickle.loads(py_model)
df = rental_score_data
columns = df.columns.tolist()
target = "RentalCount"
lin_predictions = rental_model.predict(df[columns])
print(lin_predictions)
# Compute error between the test predictions and the actual values.
#print(lin_mse)
predictions_df = pandas.DataFrame(lin_predictions)
'
, @input_data_1_name = N'rental_score_data'
, @py_model = @py_model
END;
GO
SQL
GO
) ON [PRIMARY]
GO
SQL
--Insert the results of the predictions for test set into a table
You have successfully created, trained, and deployed a model. You then used that model
in a stored procedure to predict values based on new data.
Next steps
In part four of this tutorial series, you completed these steps:
To learn more about using Python with SQL machine learning, see:
Python tutorials
Python tutorial: Categorizing customers
using k-means clustering with SQL
machine learning
Article • 04/17/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In this four-part tutorial series, use Python to develop and deploy a K-Means clustering
model in Azure SQL Managed Instance Machine Learning Services to cluster customer
data.
In part one of this series, set up the prerequisites for the tutorial and then restore a
sample dataset to a database. Later in this series, use this data to train and deploy a
clustering model in Python with SQL machine learning.
In parts two and three of this series, develop some Python scripts in an Azure Data
Studio notebook to analyze and prepare your data and train a machine learning model.
Then, in part four, run those Python scripts inside a database using stored procedures.
Clustering can be explained as organizing data into groups where members of a group
are similar in some way. For this tutorial series, imagine you own a retail business. Use
the K-Means algorithm to perform the clustering of customers in a dataset of product
purchases and returns. By clustering customers, you can focus your marketing efforts
more effectively by targeting specific groups. K-Means clustering is an unsupervised
learning algorithm that looks for patterns in data based on similarities.
In part two, learn how to prepare the data from a database to perform clustering.
In part three, learn how to create and train a K-Means clustering model in Python.
In part four, learn how to create a stored procedure in a database that can perform
clustering in Python based on new data.
Prerequisites
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
Azure Data Studio. You'll use a notebook in Azure Data Studio for both Python and SQL.
For more information about notebooks, see How to use notebooks in Azure Data
Studio.
Additional Python packages - The examples in this tutorial series use Python
packages that you may or may not have installed.
Open an Administrative Command Prompt and change to the installation path for
the version of Python you use in Azure Data Studio. For example, cd
%LocalAppData%\Programs\Python\Python37-32 . Then run the following commands to
install any of these packages that aren't already installed. Ensure these packages
are installed in the correct Python installation location. You can use the option -t
to specify the destination directory.
Console
3. You can verify that the dataset exists after you have restored the database by
querying the dbo.customer table:
SQL
USE tpcxbb_1gb;
Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.
Next steps
In part one of this tutorial series, you completed these steps:
To prepare the data for the machine learning model, follow part two of this tutorial
series:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part two of this four-part tutorial series, you'll restore and prepare the data from a
database using Python. Later in this series, you'll use this data to train and deploy a
clustering model in Python with Azure SQL Managed Instance Machine Learning
Services.
In part one, you installed the prerequisites and restored the sample database.
In part three, you'll learn how to create and train a K-Means clustering model in Python.
In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in Python based on new data.
Prerequisites
Part two of this tutorial assumes you have fulfilled the prerequisites of part one.
Separate customers
To prepare for clustering customers, you'll first separate customers along the following
dimensions:
orderRatio = return order ratio (total number of orders partially or fully returned
versus the total number of orders)
itemsRatio = return item ratio (total number of items returned versus the number
of items purchased)
monetaryRatio = return amount ratio (total monetary amount of items returned
versus the amount purchased)
frequency = return frequency
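The tutorial computes these ratios inside the SQL query, but they can be sketched in pandas on toy per-customer totals (hedged: the column names here are illustrative, mirroring the aggregates the query builds):

```python
import pandas as pd

# Made-up per-customer order and return totals.
totals = pd.DataFrame({
    "customer":      [1, 2],
    "orders_count":  [10, 4],
    "orders_items":  [25, 8],
    "orders_money":  [500.0, 120.0],
    "returns_count": [2, 0],
    "returns_items": [3, 0],
    "returns_money": [60.0, 0.0],
})

totals["orderRatio"]    = totals["returns_count"] / totals["orders_count"]
totals["itemsRatio"]    = totals["returns_items"] / totals["orders_items"]
totals["monetaryRatio"] = totals["returns_money"] / totals["orders_money"]
totals["frequency"]     = totals["returns_count"]
print(totals[["orderRatio", "itemsRatio", "monetaryRatio", "frequency"]])
```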
Open a new notebook in Azure Data Studio and enter the following script.
Python
# Load packages.
import pyodbc
import numpy as np
import pandas as pd
################################################################################
################################################################################
input_query = '''SELECT
ss_customer_sk AS customer,
COALESCE(returns_count, 0) AS frequency
FROM
SELECT
ss_customer_sk,
COUNT(distinct(ss_ticket_number)) AS orders_count,
COUNT(ss_item_sk) AS orders_items,
FROM store_sales s
GROUP BY ss_customer_sk
) orders
SELECT
sr_customer_sk,
count(distinct(sr_ticket_number)) as returns_count,
COUNT(sr_item_sk) as returns_items,
FROM store_returns
column_info = {
Python
Now display the beginning of the data frame to verify it looks correct.
Python
results
Rows Read: 37336, Total Rows Processed: 37336, Total Chunk Time: 0.172 seconds
Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.
Next steps
In part two of this tutorial series, you completed these steps:
To create a machine learning model that uses this customer data, follow part three of
this tutorial series:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part three of this four-part tutorial series, you'll build a K-Means model in Python to
perform clustering. In the next part of this series, you'll deploy this model in a database
with Azure SQL Managed Instance Machine Learning Services.
In part one, you installed the prerequisites and restored the sample database.
In part two, you learned how to prepare the data from a database to perform clustering.
In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in Python based on new data.
Prerequisites
Part three of this tutorial assumes you have fulfilled the prerequisites of part one,
and completed the steps in part two.
The algorithm accepts two inputs: the data itself, and a predefined number "k"
representing the number of clusters to generate. The output is k clusters with the input
data partitioned among them.
The goal of K-Means is to group the items into k clusters such that all items in the same
cluster are as similar to each other, and as different from items in other clusters, as
possible.
To determine the number of clusters for the algorithm to use, use a plot of the within
groups sum of squares, by number of clusters extracted. The appropriate number of
clusters to use is at the bend or "elbow" of the plot.
Python
################################################################################
################################################################################
cdata = customer_data
K = range(1, 20)
KM = (sk_cluster.KMeans(n_clusters=k).fit(cdata) for k in K)
plt.grid(True)
plt.xlabel('Number of clusters')
plt.show()
Based on the graph, it looks like k = 4 would be a good value to try. That k value will
group the customers into four clusters.
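A self-contained version of the elbow computation can be sketched as follows. This hedged example replaces the plot with printed within-cluster sums of squares, and uses synthetic data with four well-separated blobs so the elbow lands near k = 4:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Four well-separated blobs, so the "elbow" should appear near k = 4.
centers = np.array([[0, 0], [5, 5], [0, 5], [5, 0]])
cdata = np.vstack([c + rng.normal(scale=0.3, size=(25, 2)) for c in centers])

K = range(1, 8)
wss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(cdata).inertia_
       for k in K]
for k, w in zip(K, wss):
    print(k, round(w, 1))
```

The inertia drops steeply up to k = 4 and only marginally afterwards, which is the bend the plot in the tutorial visualizes.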
Perform clustering
In the following Python script, you'll use the KMeans function from the sklearn package.
Python
################################################################################
################################################################################
# It looks like k=4 is a good number to use based on the elbow graph.
n_clusters = 4
est = means_cluster.fit(customer_data[columns])
clusters = est.labels_
customer_data['cluster'] = clusters
for c in range(n_clusters):
cluster_members=customer_data[customer_data['cluster'] == c][:]
print('Cluster{}(n={}):'.format(c, len(cluster_members)))
print('-'* 17)
print(customer_data.groupby(['cluster']).mean())
Look at the clustering mean values and cluster sizes printed from the previous script.
results
Cluster0(n=31675):
-------------------
Cluster1(n=4989):
-------------------
Cluster2(n=1):
-------------------
Cluster3(n=671):
-------------------
cluster
The four cluster means are given using the variables defined in part one:
orderRatio = return order ratio (total number of orders partially or fully returned
versus the total number of orders)
itemsRatio = return item ratio (total number of items returned versus the number
of items purchased)
monetaryRatio = return amount ratio (total monetary amount of items returned
versus the amount purchased)
frequency = return frequency
Data mining using K-Means often requires further analysis of the results, and further
steps to better understand each cluster, but it can provide some good leads. Here are a
couple of ways you could interpret these results:
Cluster 0 seems to be a group of customers that are not active (all values are zero).
Cluster 3 seems to be a group that stands out in terms of return behavior.
Cluster 0 is a set of customers who are clearly not active. Perhaps you can target
marketing efforts towards this group to trigger an interest for purchases. In the next
step, you'll query the database for the email addresses of customers in cluster 0, so that
you can send a marketing email to them.
Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.
Next steps
In part three of this tutorial series, you completed these steps:
To deploy the machine learning model you've created, follow part four of this tutorial
series:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part four of this four-part tutorial series, you'll deploy a clustering model, developed
in Python, into a database using Azure SQL Managed Instance Machine Learning
Services.
In order to perform clustering on a regular basis, as new customers are registering, you
need to be able to call the Python script from any app. To do that, you can deploy the
Python script in a database by putting it inside a SQL stored procedure.
Because your model executes in the database, it can easily be trained against data
stored in the database.
In this section, you'll move the Python code you just wrote onto the server and deploy
clustering.
In part one, you installed the prerequisites and restored the sample database.
In part two, you learned how to prepare the data from a database to perform clustering.
In part three, you learned how to create and train a K-Means clustering model in Python.
Prerequisites
Part four of this tutorial series assumes you have fulfilled the prerequisites of part
one, and completed the steps in part two and part three.
SQL
USE [tpcxbb_1gb]
GO
GO
AS
BEGIN
DECLARE
SELECT
ss_customer_sk AS customer,
FROM
SELECT
ss_customer_sk,
COUNT(distinct(ss_ticket_number)) AS orders_count,
COUNT(ss_item_sk) AS orders_items,
FROM store_sales s
GROUP BY ss_customer_sk
) orders
SELECT
sr_customer_sk,
count(distinct(sr_ticket_number)) as returns_count,
COUNT(sr_item_sk) as returns_items,
FROM store_returns
GROUP BY sr_customer_sk
) returned ON ss_customer_sk=sr_customer_sk
'
EXEC sp_execute_external_script
@language = N'Python'
, @script = N'
from sklearn.cluster import KMeans
import pandas as pd
customer_data = my_input_data
n_clusters = 4
#Perform clustering
est = KMeans(n_clusters=n_clusters, random_state=111).fit(customer_data[["orderRatio","itemsRatio","monetaryRatio","frequency"]])
clusters = est.labels_
customer_data["cluster"] = clusters
OutputDataSet = customer_data
'
, @input_data_1 = @input_query
, @input_data_1_name = N'my_input_data'
END;
GO
Perform clustering
Now that you've created the stored procedure, execute the following script to perform
clustering using the procedure.
SQL
GO
) ON [PRIMARY]
GO
EXEC [dbo].[py_generate_customer_return_clusters];
Suppose you want to send a promotional email to customers in cluster 0, the group that
was inactive (you can see how the four clusters were described in part three of this
tutorial). The following code selects the email addresses of customers in cluster 0.
SQL
USE [tpcxbb_1gb]
FROM dbo.customer
JOIN
[dbo].[py_customer_clusters] as c
ON c.Customer = customer.c_customer_sk
WHERE c.cluster = 0
You can change the c.cluster value to return email addresses for customers in other
clusters.
Clean up resources
When you're finished with this tutorial, you can delete the tpcxbb_1gb database.
Next steps
In part four of this tutorial series, you completed these steps:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In this five-part tutorial series for SQL programmers, you'll learn about Python
integration in Machine Learning Services in Azure SQL Managed Instance.
You'll build and deploy a Python-based machine learning solution using a sample
database on SQL Server. You'll use T-SQL, Azure Data Studio or SQL Server Management
Studio, and a database instance with SQL machine learning and Python language
support.
This tutorial series introduces you to Python functions used in a data modeling
workflow. Parts include data exploration, building and training a binary classification
model, and model deployment. You'll use sample data from the New York City Taxi and
Limousine Commission. The model you'll build predicts whether a trip is likely to result
in a tip based on the time of day, distance traveled, and pick-up location.
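The kind of binary classifier this series builds can be sketched on synthetic trip features. This is a hedged illustration: the feature names mirror the tutorial's, but the data and the tipping rule are made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200
trip_distance = rng.uniform(0.5, 10.0, n)
trip_time_in_secs = trip_distance * 180 + rng.normal(0, 60, n)
# Made-up rule: longer trips are more likely to end in a tip.
tipped = (trip_distance + rng.normal(0, 1, n) > 4).astype(int)

X = np.column_stack([trip_distance, trip_time_in_secs])
clf = LogisticRegression(max_iter=1000).fit(X, tipped)
print("training accuracy:", clf.score(X, tipped))
```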
In the first part of this series, you'll install the prerequisites and restore the sample
database. In parts two and three, you'll develop some Python scripts to prepare your
data and train a machine learning model. Then, in parts four and five, you'll run those
Python scripts inside the database using T-SQL stored procedures.
" Install prerequisites
" Restore the sample database
In part two, you'll explore the sample data and generate some plots.
In part three, you'll learn how to create features from raw data by using a Transact-SQL
function. You'll then call that function from a stored procedure to create a table that
contains the feature values.
In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.
In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
Note
This tutorial is available in both R and Python. For the R version, see R tutorial:
Predict NYC taxi fares with binary classification.
Prerequisites
Grant permissions to execute Python scripts
All tasks can be done using Transact-SQL stored procedures in Azure Data Studio or
Management Studio.
This tutorial series assumes familiarity with basic database operations such as creating
databases and tables, importing data, and writing SQL queries. It does not assume you
know Python and all Python code is provided.
Development and testing of the actual code is best performed using a dedicated
development environment. However, after the script is fully tested, you can easily deploy
it to SQL Server using Transact-SQL stored procedures in the familiar environment of
Azure Data Studio or Management Studio. Wrapping external code in stored procedures
is the primary mechanism for operationalizing code in SQL Server.
After the model has been saved to the database, you can call the model for predictions
from Transact-SQL by using stored procedures.
Whether you're a SQL programmer new to Python, or a Python developer new to SQL,
this five-part tutorial series introduces a typical workflow for conducting in-database
analytics with Python and SQL Server.
Next steps
In this article, you:
" Installed prerequisites
" Restored the sample database
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part two of this five-part tutorial series, you'll explore the sample data and generate
some plots. Later, you'll learn how to serialize graphics objects in Python, and then
deserialize those objects and make plots.
In part one, you installed the prerequisites and restored the sample database.
In part three, you'll learn how to create features from raw data by using a Transact-SQL
function. You'll then call that function from a stored procedure to create a table that
contains the feature values.
In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.
In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
The original dataset used separate files for the taxi identifiers and trip records.
We've joined the two original datasets on the columns medallion, hack_license, and
pickup_datetime.
The original dataset spanned many files and was quite large. We've downsampled
to get just 1% of the original number of records. The current data table has
1,703,957 rows and 23 columns.
Taxi identifiers
Each trip record includes the pickup and drop-off location and time, and the trip
distance.
Each fare record includes payment information such as the payment type, total amount
of payment, and the tip amount.
The last three columns can be used for various machine learning tasks. The tip_amount
column contains continuous numeric values and can be used as the label column for
regression analysis. The tipped column has only yes/no values and is used for binary
classification. The tip_class column has multiple class labels and therefore can be used
as the label for multi-class classification tasks.
The values used for the label columns are all based on the tip_amount column, using
these business rules:
Class 0: tip_amount = $0
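The tipped label can be derived from tip_amount with one comparison. This hedged sketch shows only the binary label (the multi-class tip_class boundaries beyond Class 0 aren't reproduced here):

```python
import pandas as pd

fares = pd.DataFrame({"tip_amount": [0.0, 0.0, 1.5, 4.25]})
# tipped is 1 when any tip was paid, 0 otherwise (Class 0: tip_amount = $0).
fares["tipped"] = (fares["tip_amount"] > 0).astype(int)
print(fares["tipped"].tolist())
```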
The variable @query defines the query text SELECT tipped FROM
nyctaxi_sample , which is passed to the Python code block as the argument to
the script input variable, @input_data_1 .
The Python script is fairly simple: matplotlib figure objects are used to make
the histogram and scatter plot, and these objects are then serialized using the
pickle library.
SQL
GO
AS
BEGIN
EXECUTE sp_execute_external_script
@language = N'Python',
@script = N'
import matplotlib
matplotlib.use("Agg")
import pandas as pd
import pickle
fig_handle = plt.figure()
plt.hist(InputDataSet.tipped)
plt.xlabel("Tipped")
plt.ylabel("Counts")
plt.title("Histogram, Tipped")
plt.clf()
plt.hist(InputDataSet.tip_amount)
plt.xlabel("Tip amount ($)")
plt.ylabel("Counts")
plt.clf()
plt.hist(InputDataSet.fare_amount)
plt.ylabel("Counts")
plt.clf()
plt.clf()
',
@input_data_1 = @query
END
GO
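The figure-pickling step inside the procedure can be sketched on its own. This hedged example builds one histogram, serializes it the way the procedure does, and then deserializes it the way the client does, writing the image to an in-memory buffer instead of a file:

```python
import io
import pickle
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, as in the stored procedure
import matplotlib.pyplot as plt

fig_handle = plt.figure()
plt.hist([0, 0, 1, 1, 1])
plt.title("Histogram, Tipped")

blob = pickle.dumps(fig_handle)  # the bytes returned in the plot column
restored = pickle.loads(blob)    # what the client does with a fetched row

buf = io.BytesIO()
restored.savefig(buf, format="png")
print(buf.getbuffer().nbytes > 0)
```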
2. Now run the stored procedure with no arguments to generate a plot from the data
hard-coded as the input query.
SQL
EXEC [dbo].[PyPlotMatplotlib]
SQL
plot
0xFFD8FFE000104A4649...
0xFFD8FFE000104A4649...
0xFFD8FFE000104A4649...
0xFFD8FFE000104A4649...
4. From a Python client, you can now connect to the SQL Server instance that
generated the binary plot objects, and view the plots.
To do this, run the following Python code, replacing the server name, database
name, and credentials as appropriate (for Windows authentication, replace the UID
and PWD parameters with Trusted_Connection=Yes ). Make sure the Python version
is the same on the client and the server. Also make sure that the Python libraries
on your client (such as matplotlib) are the same or higher version relative to the
libraries installed on the server. To view a list of installed packages and their
versions, see Get Python package information.
Python
%matplotlib notebook
import pyodbc
import pickle
import os

# Replace the placeholders with your server, database, and credentials.
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=<server>;DATABASE=<database>;UID=<username>;PWD=<password>')
cursor = cnxn.cursor()
cursor.execute("EXECUTE [dbo].[PyPlotMatplotlib]")
tables = cursor.fetchall()
for i in range(0, len(tables)):
    fig = pickle.loads(tables[i][0])
    fig.savefig(str(i)+'.png')
print("The plots are saved in directory: ", os.getcwd())
5. If the connection is successful, you should see a message like the following:
6. The output file is created in the Python working directory. To view the plot, locate
the Python working directory, and open the file. The following image shows a plot
saved on the client computer.
Next steps
In this article, you:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part three of this five-part tutorial series, you'll learn how to create features from raw
data by using a Transact-SQL function. You'll then call that function from a SQL stored
procedure to create a table that contains the feature values.
The process of feature engineering, creating features from the raw data, can be a critical
step in advanced analytics modeling.
In part one, you installed the prerequisites and restored the sample database.
In part two, you explored the sample data and generated some plots.
In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.
In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
You'll use one custom T-SQL function, fnCalculateDistance, to compute the distance
using the Haversine formula, and use a second custom T-SQL function,
fnEngineerFeatures, to create a table containing all the features.
SQL
RETURNS float
AS
BEGIN
-- Convert to radians
-- Calculate distance
--Convert to miles
IF @distance <> 0
BEGIN
END
RETURN @distance
END
GO
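Since most of the T-SQL function body was elided above, here is a hedged Python sketch of the Haversine distance it computes. The earth-radius constant and function name are assumptions for illustration, not the function's exact definition:

```python
import math

def haversine_miles(lat1, long1, lat2, long2):
    """Great-circle distance between two points, in miles (Haversine formula)."""
    r = 3958.76  # assumed mean earth radius in miles
    lat1, long1, lat2, long2 = map(math.radians, [lat1, long1, lat2, long2])
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((long2 - long1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# One degree of latitude is roughly 69 miles.
print(round(haversine_miles(40.0, -74.0, 41.0, -74.0), 2))
```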
Notes:
SQL
@passenger_count int = 0,
@trip_distance float = 0,
@trip_time_in_secs int = 0,
@pickup_latitude float = 0,
@pickup_longitude float = 0,
@dropoff_latitude float = 0,
@dropoff_longitude float = 0)
RETURNS TABLE
AS
RETURN
SELECT
@passenger_count AS passenger_count,
@trip_distance AS trip_distance,
@trip_time_in_secs AS trip_time_in_secs,
[dbo].[fnCalculateDistance](@pickup_latitude, @pickup_longitude,
@dropoff_latitude, @dropoff_longitude) AS direct_distance
GO
To verify that this function works, you can use it to calculate the geographical distance
for those trips where the metered distance was 0 but the pick-up and drop-off locations
were different.
SQL
dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) AS direct_distance
FROM nyctaxi_sample
As you can see, the distance reported by the meter doesn't always correspond to
geographical distance. This is why feature engineering is important.
In the next part, you'll learn how to use these data features to create and train a
machine learning model using Python.
Next steps
In this article, you:
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part four of this five-part tutorial series, you'll learn how to train a machine learning
model using the Python packages scikit-learn and revoscalepy. These Python libraries
are already installed with SQL Server machine learning.
You'll load the modules and call the necessary functions to create and train the model
using a SQL Server stored procedure. The model requires the data features you
engineered in earlier parts of this tutorial series. Finally, you'll save the trained model to
a SQL Server table.
In part one, you installed the prerequisites and restored the sample database.
In part two, you explored the sample data and generated some plots.
In part three, you learned how to create features from raw data by using a Transact-SQL
function. You then called that function from a stored procedure to create a table that
contains the feature values.
In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
SQL
DROP PROCEDURE IF EXISTS PyTrainTestSplit;
GO
CREATE PROCEDURE [dbo].[PyTrainTestSplit] (@pct int)
AS
DROP TABLE IF EXISTS dbo.nyctaxi_sample_training
SELECT * INTO nyctaxi_sample_training FROM nyctaxi_sample WHERE (ABS(CAST(BINARY_CHECKSUM(medallion, hack_license) AS int)) % 100) < @pct
DROP TABLE IF EXISTS dbo.nyctaxi_sample_testing
SELECT * INTO nyctaxi_sample_testing FROM nyctaxi_sample WHERE (ABS(CAST(BINARY_CHECKSUM(medallion, hack_license) AS int)) % 100) > @pct
GO
2. To divide your data using a custom split, run the stored procedure, and provide an
integer parameter that represents the percentage of data to allocate to the
training set. For example, the following statement would allocate 60% of data to
the training set.
SQL
EXEC PyTrainTestSplit 60
GO
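The split logic itself is easy to reason about: each row is deterministically assigned a bucket from 0 to 99, and the bucket is compared against the requested training percentage. A minimal Python sketch of the same idea, using a hash in place of the procedure's checksum (the `medallion-N` keys are made up for illustration):

```python
import hashlib

def assign_split(key: str, pct: int) -> str:
    """Deterministically bucket a row by hashing its key, then compare the
    bucket (0-99) against the requested training percentage."""
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % 100
    return "train" if bucket < pct else "test"

rows = [f"medallion-{i}" for i in range(1000)]
train = [r for r in rows if assign_split(r, 60) == "train"]
print(f"training fraction: {len(train) / len(rows):.2f}")
```

Because the assignment depends only on the key, re-running the split on the same data reproduces the same partition, which is what makes retraining repeatable.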
The stored procedure PyTrainScikit creates a tip prediction model using the scikit-
learn package.
The stored procedure TrainTipPredictionModelRxPy creates a tip prediction model
using the revoscalepy package.
Each stored procedure uses the input data you provide to create and train a logistic
regression model. All Python code is wrapped in the system stored procedure,
sp_execute_external_script.
To make it easier to retrain the model on new data, you wrap the call to
sp_execute_external_script in another stored procedure, and pass in the new training
data as a parameter. This section will walk you through that process.
PyTrainScikit
1. In Management Studio, open a new Query window and run the following
statement to create the stored procedure PyTrainScikit. The stored procedure
contains a definition of the input data, so you don't need to provide an input
query.
SQL
DROP PROCEDURE IF EXISTS PyTrainScikit;
GO
CREATE PROCEDURE [dbo].[PyTrainScikit] (@trained_model varbinary(max) OUTPUT)
AS
BEGIN
EXEC sp_execute_external_script
@language = N'Python',
@script = N'
import numpy
import pickle
from sklearn.linear_model import LogisticRegression
X = InputDataSet[["passenger_count", "trip_distance",
"trip_time_in_secs", "direct_distance"]]
y = numpy.ravel(InputDataSet[["tipped"]])
SKLalgo = LogisticRegression()
logitObj = SKLalgo.fit(X, y)
##Serialize model
trained_model = pickle.dumps(logitObj)
',
@input_data_1 = N'
select tipped, fare_amount, passenger_count, trip_time_in_secs, trip_distance,
dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance
from nyctaxi_sample_training
',
@input_data_1_name = N'InputDataSet',
@params = N'@trained_model varbinary(max) OUTPUT',
@trained_model = @trained_model OUTPUT;
END;
GO
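Stripped of the T-SQL wrapper, the Python payload of this procedure does four things: select the feature columns, fit a logistic regression, serialize the model with pickle, and hand the bytes back for storage. A standalone sketch on synthetic data (the column layout follows the tutorial; the data itself is made up):

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for InputDataSet: four feature columns and a binary "tipped" label
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 1] + X[:, 3] > 1.0).astype(int)

SKLalgo = LogisticRegression()
logitObj = SKLalgo.fit(X, y)

# Serialize the fitted model, as the stored procedure does before the
# bytes are inserted into a table
trained_model = pickle.dumps(logitObj)

# Round-trip check: a deserialized model makes identical predictions
restored = pickle.loads(trained_model)
print((restored.predict(X) == logitObj.predict(X)).all())
```

The pickle round-trip is the whole point of storing the model in a table: any later session can reload the same fitted object without retraining.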
2. Run the following SQL statements to insert the trained model into table
nyc_taxi_models.
SQL
Processing of the data and fitting the model might take a couple of minutes.
Messages that would be piped to Python's stdout stream are displayed in the
Messages window of Management Studio. For example:
text
3. Open the table nyc_taxi_models. You can see that one new row has been added,
which contains the serialized model in the column model.
text
SciKit_model
0x800363736B6C6561726E2E6C696E6561....
TrainTipPredictionModelRxPy
This stored procedure uses the revoscalepy Python package. It contains objects,
transformation, and algorithms similar to those provided for the R language's
RevoScaleR package.
By using revoscalepy, you can create remote compute contexts, move data between
compute contexts, transform data, and train predictive models using popular algorithms
such as logistic and linear regression, decision trees, and more. For more information,
see revoscalepy module in SQL Server and revoscalepy function reference.
1. In Management Studio, open a new Query window and run the following
statement to create the stored procedure TrainTipPredictionModelRxPy. Because
the stored procedure already includes a definition of the input data, you don't
need to provide an input query.
SQL
DROP PROCEDURE IF EXISTS TrainTipPredictionModelRxPy;
GO
CREATE PROCEDURE [dbo].[TrainTipPredictionModelRxPy] (@trained_model varbinary(max) OUTPUT)
AS
BEGIN
EXEC sp_execute_external_script
@language = N'Python',
@script = N'
import numpy
import pickle
from revoscalepy.functions.RxLogit import rx_logit

## Create a logistic regression model using the rx_logit function from revoscalepy
logitObj = rx_logit("tipped ~ passenger_count + trip_distance + trip_time_in_secs + direct_distance", data = InputDataSet)

## Serialize model
trained_model = pickle.dumps(logitObj)
',
@input_data_1 = N'
select tipped, fare_amount, passenger_count, trip_time_in_secs, trip_distance,
dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance
from nyctaxi_sample_training
',
@input_data_1_name = N'InputDataSet',
@params = N'@trained_model varbinary(max) OUTPUT',
@trained_model = @trained_model OUTPUT;
END;
GO
As part of model training, this stored procedure selects the input features, fits a logistic regression model with revoscalepy, and serializes the trained model with pickle.
2. Run the stored procedure as follows to insert the trained revoscalepy model into
the table nyc_taxi_models.
SQL
Processing of the data and fitting the model might take a while. Messages that
would be piped to Python's stdout stream are displayed in the Messages window
of Management Studio. For example:
text
3. Open the table nyc_taxi_models. You can see that one new row has been added,
which contains the serialized model in the column model.
text
revoscalepy_model
0x8003637265766F7363616c....
In the next part of this tutorial, you'll use the trained models to create predictions.
Next steps
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part five of this five-part tutorial series, you'll learn how to operationalize the models
that you trained and saved in the previous part.
This part of the tutorial demonstrates two methods for creating predictions based on a
Python model: batch scoring and scoring row by row.
Batch scoring: To provide multiple rows of input data, pass a SELECT query as an
argument to the stored procedure. The result is a table of observations
corresponding to the input cases.
Individual scoring: Pass a set of individual parameter values as input. The stored
procedure returns a single row or value.
All the Python code needed for scoring is provided as part of the stored procedures.
In part one, you installed the prerequisites and restored the sample database.
In part two, you explored the sample data and generated some plots.
In part three, you learned how to create features from raw data by using a Transact-SQL
function. You then called that function from a stored procedure to create a table that
contains the feature values.
In part four, you loaded the modules and called the necessary functions to create and
train the model using a SQL Server stored procedure.
Batch scoring
The first two stored procedures created using the following scripts illustrate the basic
syntax for wrapping a Python prediction call in a stored procedure. Both stored
procedures require a table of data as inputs.
The name of the model to use is provided as an input parameter to the stored
procedure. The stored procedure loads the serialized model from the database
table nyc_taxi_models, using the SELECT statement in the stored procedure.
The serialized model is stored in the Python variable mod for further processing
using Python.
The new cases that need to be scored are obtained from the Transact-SQL query
specified in @input_data_1 . As the query data is read, the rows are saved in the
default data frame, InputDataSet .
Both stored procedures use functions from sklearn to calculate an accuracy metric,
AUC (area under curve). Accuracy metrics such as AUC can only be generated if
you also provide the target label (the tipped column). Predictions do not need the
target label (variable y ), but the accuracy metric calculation does.
Therefore, if you don't have target labels for the data to be scored, you can modify
the stored procedure to remove the AUC calculations, and return only the tip
probabilities from the features (variable X in the stored procedure).
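To make the AUC calculation concrete, here is a minimal sketch using sklearn.metrics on hand-made labels and probabilities (the values are illustrative only):

```python
from sklearn.metrics import roc_auc_score

# True labels (tipped or not) and predicted tip probabilities for four trips
y_true = [0, 0, 1, 1]
y_prob = [0.1, 0.4, 0.35, 0.8]

# AUC is the probability that a randomly chosen positive case is ranked
# above a randomly chosen negative case; here 3 of the 4 pairs are ordered
# correctly, so the score is 0.75
print(roc_auc_score(y_true, y_prob))  # 0.75
```

Note that the score needs both the probabilities and the true labels, which is exactly why the AUC step must be dropped when scoring unlabeled data.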
PredictTipSciKitPy
Run the following T-SQL statements to create the stored procedure PredictTipSciKitPy .
This stored procedure requires a model based on the scikit-learn package, because it
uses functions specific to that package.
The data frame containing inputs is passed to the predict_proba function of the logistic
regression model, mod . For each input row, predict_proba ( probArray = mod.predict_proba(X) )
returns the probability that a tip (of any amount) will be given.
SQL
DROP PROCEDURE IF EXISTS PredictTipSciKitPy;
GO
CREATE PROCEDURE [dbo].[PredictTipSciKitPy] (@model varchar(50), @inquery nvarchar(max))
AS
BEGIN
DECLARE @lmodel2 varbinary(max) = (SELECT model FROM nyc_taxi_models WHERE name = @model);
EXEC sp_execute_external_script
@language = N'Python',
@script = N'
import pickle;
import numpy;
mod = pickle.loads(lmodel2)
X = InputDataSet[["passenger_count", "trip_distance", "trip_time_in_secs", "direct_distance"]]
y = numpy.ravel(InputDataSet[["tipped"]])
probArray = mod.predict_proba(X)
probList = []
for i in range(len(probArray)):
probList.append((probArray[i])[1])
probArray = numpy.asarray(probList)
',
@input_data_1 = @inquery,
@input_data_1_name = N'InputDataSet',
@params = N'@lmodel2 varbinary(max)',
@lmodel2 = @lmodel2
END
GO
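The loop that builds probList simply keeps the second column of predict_proba's output, which is the probability of the positive class. In NumPy terms it is equivalent to a column slice, as this small sketch shows (the array values are made up):

```python
import numpy as np

# predict_proba returns one row per input case: [P(no tip), P(tip)]
probArray = np.array([[0.7, 0.3], [0.2, 0.8], [0.55, 0.45]])

# The stored procedure's loop...
probList = []
for i in range(len(probArray)):
    probList.append((probArray[i])[1])

# ...is the same as taking the second column directly
print(np.allclose(np.asarray(probList), probArray[:, 1]))  # True
```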
PredictTipRxPy
Run the following T-SQL statements to create the stored procedure PredictTipRxPy .
This stored procedure uses the same inputs and creates the same type of scores as the
previous stored procedure, but it uses functions from the revoscalepy package provided
with SQL Server machine learning.
SQL
DROP PROCEDURE IF EXISTS PredictTipRxPy;
GO
CREATE PROCEDURE [dbo].[PredictTipRxPy] (@model varchar(50), @inquery nvarchar(max))
AS
BEGIN
DECLARE @lmodel2 varbinary(max) = (SELECT model FROM nyc_taxi_models WHERE name = @model);
EXEC sp_execute_external_script
@language = N'Python',
@script = N'
import pickle;
import numpy;
from revoscalepy.functions.RxPredict import rx_predict
mod = pickle.loads(lmodel2)
X = InputDataSet[["passenger_count", "trip_distance", "trip_time_in_secs", "direct_distance"]]
y = numpy.ravel(InputDataSet[["tipped"]])
probArray = rx_predict(mod, X)
probList = probArray["tipped_Pred"].values
probArray = numpy.asarray(probList)
',
@input_data_1 = @inquery,
@input_data_1_name = N'InputDataSet',
@params = N'@lmodel2 varbinary(max)',
@lmodel2 = @lmodel2
END
GO
By passing those arguments to the stored procedure, you can select a particular model
or change the data used for scoring.
1. To use the scikit-learn model for scoring, call the stored procedure
PredictTipSciKitPy, passing the model name and query string as inputs.
SQL
DECLARE @query_string nvarchar(max)
SET @query_string='
select tipped, passenger_count, trip_distance, trip_time_in_secs,
dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance
from nyctaxi_sample_testing'
EXEC [dbo].[PredictTipSciKitPy] 'SciKit_model', @query_string;
The stored procedure returns predicted probabilities for each trip that was passed
in as part of the input query.
If you're using SSMS (SQL Server Management Studio) for running queries, the
probabilities will appear as a table in the Results pane. The Messages pane outputs
the accuracy metric (AUC or area under curve) with a value of around 0.56.
2. To use the revoscalepy model for scoring, call the stored procedure
PredictTipRxPy, passing the model name and query string as inputs.
SQL
DECLARE @query_string nvarchar(max)
SET @query_string='
select tipped, passenger_count, trip_distance, trip_time_in_secs,
dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
dropoff_latitude, dropoff_longitude) as direct_distance
from nyctaxi_sample_testing'
EXEC [dbo].[PredictTipRxPy] 'revoscalepy_model', @query_string;
Single-row scoring
Sometimes, instead of batch scoring, you might want to pass in a single case, get
values from an application, and return a single result based on those values. For
example, you could set up an Excel worksheet, web application, or report to call the
stored procedure and pass to it inputs typed or selected by users.
In this section, you'll learn how to create single predictions by calling two stored
procedures: PredictTipSingleModeSciKitPy and PredictTipSingleModeRxPy.
Both models take as input a series of single values, such as passenger count, trip
distance, and so forth. A table-valued function, fnEngineerFeatures , is used to convert
latitude and longitude values from the inputs to a new feature, direct distance. Part four
contains a description of this table-valued function.
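The single-row path can be sketched outside the database as well: train a model on a few feature rows, then score one new case assembled from individual parameter values. Everything here (data and values) is synthetic; only the column names follow the tutorial:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

cols = ["passenger_count", "trip_distance", "trip_time_in_secs", "direct_distance"]

# Tiny synthetic training set with a binary "tipped" label
train = pd.DataFrame(
    [[1, 1.0, 300, 0.9, 0],
     [2, 5.0, 1200, 4.5, 1],
     [1, 0.5, 120, 0.4, 0],
     [3, 8.0, 1800, 7.2, 1],
     [1, 2.0, 600, 1.8, 0],
     [2, 6.5, 1500, 6.0, 1]],
    columns=cols + ["tipped"])

mod = LogisticRegression().fit(train[cols], train["tipped"])

# One new case built from individual parameter values, as the stored
# procedure does via the fnEngineerFeatures table-valued function
X_one = pd.DataFrame([[1, 2.5, 630, 2.2]], columns=cols)
p = mod.predict_proba(X_one)[0][1]
print(f"tip probability: {p:.3f}")
```

The key point mirrored from the stored procedures: the single case must carry the same feature columns, in the same order and with compatible types, as the data the model was trained on.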
Note
It's important that you provide all the input features required by the Python model
when you call the stored procedure from an external application. To avoid errors,
you might need to cast or convert the input data to a Python data type, in addition
to validating data type and data length.
PredictTipSingleModeSciKitPy
The following stored procedure PredictTipSingleModeSciKitPy performs scoring using
the scikit-learn model.
SQL
DROP PROCEDURE IF EXISTS PredictTipSingleModeSciKitPy;
GO
CREATE PROCEDURE [dbo].[PredictTipSingleModeSciKitPy] (@passenger_count int = 0,
@trip_distance float = 0,
@trip_time_in_secs int = 0,
@pickup_latitude float = 0,
@pickup_longitude float = 0,
@dropoff_latitude float = 0,
@dropoff_longitude float = 0)
AS
BEGIN
DECLARE @inquery nvarchar(max) = N'
SELECT * FROM [dbo].[fnEngineerFeatures](
@passenger_count,
@trip_distance,
@trip_time_in_secs,
@pickup_latitude,
@pickup_longitude,
@dropoff_latitude,
@dropoff_longitude)
'
DECLARE @lmodel2 varbinary(max) = (SELECT model FROM nyc_taxi_models WHERE name = 'SciKit_model');
EXEC sp_execute_external_script
@language = N'Python',
@script = N'
import pickle;
import numpy;
mod = pickle.loads(model)
X = InputDataSet[["passenger_count", "trip_distance", "trip_time_in_secs", "direct_distance"]]
probList = []
probList.append((mod.predict_proba(X)[0])[1])
',
@input_data_1 = @inquery,
@params = N'@model varbinary(max),
@passenger_count int,
@trip_distance float,
@trip_time_in_secs int ,
@pickup_latitude float ,
@pickup_longitude float ,
@dropoff_latitude float ,
@dropoff_longitude float',
@model = @lmodel2,
@passenger_count =@passenger_count ,
@trip_distance=@trip_distance,
@trip_time_in_secs=@trip_time_in_secs,
@pickup_latitude=@pickup_latitude,
@pickup_longitude=@pickup_longitude,
@dropoff_latitude=@dropoff_latitude,
@dropoff_longitude=@dropoff_longitude
GO
PredictTipSingleModeRxPy
The following stored procedure PredictTipSingleModeRxPy performs scoring using the
revoscalepy model.
SQL
DROP PROCEDURE IF EXISTS PredictTipSingleModeRxPy;
GO
CREATE PROCEDURE [dbo].[PredictTipSingleModeRxPy] (@passenger_count int = 0,
@trip_distance float = 0,
@trip_time_in_secs int = 0,
@pickup_latitude float = 0,
@pickup_longitude float = 0,
@dropoff_latitude float = 0,
@dropoff_longitude float = 0)
AS
BEGIN
DECLARE @inquery nvarchar(max) = N'
SELECT * FROM [dbo].[fnEngineerFeatures](
@passenger_count,
@trip_distance,
@trip_time_in_secs,
@pickup_latitude,
@pickup_longitude,
@dropoff_latitude,
@dropoff_longitude)
'
DECLARE @lmodel2 varbinary(max) = (SELECT model FROM nyc_taxi_models WHERE name = 'revoscalepy_model');
EXEC sp_execute_external_script
@language = N'Python',
@script = N'
import pickle;
import numpy;
from revoscalepy.functions.RxPredict import rx_predict
mod = pickle.loads(model)
X = InputDataSet[["passenger_count", "trip_distance", "trip_time_in_secs", "direct_distance"]]
probArray = rx_predict(mod, X)
probList = []
probList = probArray["tipped_Pred"].values
',
@input_data_1 = @inquery,
@params = N'@model varbinary(max),
@passenger_count int,
@trip_distance float,
@trip_time_in_secs int ,
@pickup_latitude float ,
@pickup_longitude float ,
@dropoff_latitude float ,
@dropoff_longitude float',
@model = @lmodel2,
@passenger_count =@passenger_count ,
@trip_distance=@trip_distance,
@trip_time_in_secs=@trip_time_in_secs,
@pickup_latitude=@pickup_latitude,
@pickup_longitude=@pickup_longitude,
@dropoff_latitude=@dropoff_latitude,
@dropoff_longitude=@dropoff_longitude
GO
The seven required values for these feature columns are, in order:
passenger_count
trip_distance
trip_time_in_secs
pickup_latitude
pickup_longitude
dropoff_latitude
dropoff_longitude
For example:
SQL
EXEC [dbo].[PredictTipSingleModeSciKitPy] 1, 2.5, 631, 40.763958, -73.973373, 40.782139, -73.977303
SQL
EXEC [dbo].[PredictTipSingleModeRxPy] 1, 2.5, 631, 40.763958, -73.973373, 40.782139, -73.977303
The output from both procedures is a probability of a tip being paid for the taxi trip with
the specified parameters or features.
Conclusion
In this tutorial series, you've learned how to work with Python code embedded in stored
procedures. The integration with Transact-SQL makes it much easier to deploy Python
models for prediction and to incorporate model retraining as part of an enterprise data
workflow.
Next steps
For more information about Python, see Python extension in SQL Server.
Tutorial: Develop a predictive model in
R with SQL machine learning
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In this four-part tutorial series, you will use R and a machine learning model in Azure
SQL Managed Instance Machine Learning Services to predict the number of ski rentals.
Imagine you own a ski rental business and you want to predict the number of rentals
that you'll have on a future date. This information will help you get your stock, staff, and
facilities ready.
In the first part of this series, you'll get set up with the prerequisites. In parts two and
three, you'll develop some R scripts in a notebook to prepare your data and train a
machine learning model. Then, in part four, you'll run those R scripts inside a database
using T-SQL stored procedures.
In part two, you'll learn how to load the data from a database into an R data frame,
and prepare the data in R.
In part three, you'll learn how to train a machine learning model in R.
In part four, you'll learn how to store the model in a database, and then create stored
procedures from the R scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.
Prerequisites
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
SQL query tool - This tutorial assumes you're using Azure Data Studio. For more
information, see How to use notebooks in Azure Data Studio.
3. You can verify that the restored database exists by querying the dbo.rental_data
table:
SQL
USE TutorialDB;
SELECT * FROM [dbo].[rental_data];
Clean up resources
If you're not going to continue with this tutorial, delete the TutorialDB database.
Next steps
In part one of this tutorial series, you completed these steps:
To prepare the data for the machine learning model, follow part two of this tutorial
series:
Prepare data to train a predictive model in R
Tutorial: Prepare data to train a
predictive model in R with SQL machine
learning
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part two of this four-part tutorial series, you'll prepare data from a database using R.
Later in this series, you'll use this data to train and deploy a predictive model in R with
Azure SQL Managed Instance Machine Learning Services.
In part four, you'll learn how to store the model in a database, and then create stored
procedures from the R scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.
Prerequisites
Part two of this tutorial assumes you have completed part one and its prerequisites.
Create a new RScript file in RStudio and run the following script. Replace ServerName
with your own connection information.
library(RODBC)
#Connection string to the TutorialDB database; replace ServerName with your own
connStr <- "Driver=SQL Server;Server=ServerName;Database=TutorialDB;uid=Username;pwd=Password"
ch <- odbcDriverConnect(connStr)
#Get the data from the dbo.rental_data table
rentaldata <- sqlQuery(ch, "SELECT * FROM dbo.rental_data")
#Take a look at the structure of the data and the top rows
head(rentaldata)
str(rentaldata)
results
Year Month Day RentalCount WeekDay Holiday Snow
1 2014 1 20 445 2 1 0
2 2014 2 13 40 5 0 0
3 2013 3 10 456 1 0 0
4 2014 3 31 38 2 0 0
5 2014 4 24 23 5 0 0
6 2015 2 11 42 4 0 0
str(rentaldata);
results
$ Year : int 2014 2014 2013 2014 2014 2015 2013 2014 2013 2015 ...
Clean up resources
If you're not going to continue with this tutorial, delete the TutorialDB database.
Next steps
In part two of this tutorial series, you learned how to:
To create a machine learning model that uses data from the TutorialDB database, follow
part three of this tutorial series:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part three of this four-part tutorial series, you'll train a predictive model in R. In the
next part of this series, you'll deploy this model in an Azure SQL Managed Instance
database with Machine Learning Services.
In part two, you learned how to load the data from a database into an R data frame
and prepare the data in R.
In part four, you'll learn how to store the model in a database, and then create stored
procedures from the R scripts you developed in parts two and three. The stored
procedures will run on the server to make predictions based on new data.
Prerequisites
Part three of this tutorial series assumes you have fulfilled the prerequisites of part one,
and completed the steps in part two.
# one for training the model and the other for validating it
#Use the RentalCount column to check the quality of the prediction against
actual values
#Model 2: Use rpart to create a decision tree model, trained with the
training data set
library(rpart);
#Use both models to make predictions using the test data set.
#To verify it worked, look at the top rows of the two prediction data sets.
head(predict_lm);
head(predict_rpart);
results
head(predict_lm):
1 27.45858 42 2 11 4 0 0
2 387.29344 360 3 29 1 0 0
3 16.37349 20 4 22 4 0 0
4 31.07058 42 3 6 6 0 0
5 463.97263 405 2 28 7 1 0
6 102.21695 38 1 12 2 1 0
head(predict_rpart):
1 40.0000 42 2 11 4 0 0
2 332.5714 360 3 29 1 0 0
3 27.7500 20 4 22 4 0 0
4 34.2500 42 3 6 6 0 0
5 645.7059 405 2 28 7 1 0
6 40.0000 38 1 12 2 1 0
It looks like the decision tree model is the more accurate of the two models.
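The comparison above was done in R; an analogous check can be sketched with scikit-learn, scoring a linear model and a decision tree on the same held-out data. The data below is synthetic and deliberately step-shaped, so, as in the tutorial, the tree comes out ahead:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.random((300, 2))
# Step-like target: a tree-friendly shape, loosely mimicking demand that
# jumps at a threshold rather than varying smoothly
y = 100 * (X[:, 0] > 0.5) + 20 * X[:, 1]

X_train, X_test = X[:240], X[240:]
y_train, y_test = y[:240], y[240:]

lm = LinearRegression().fit(X_train, y_train)
tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

mae_lm = mean_absolute_error(y_test, lm.predict(X_test))
mae_tree = mean_absolute_error(y_test, tree.predict(X_test))
print(f"linear MAE: {mae_lm:.2f}, tree MAE: {mae_tree:.2f}")
```

A linear model has to smear the 100-unit jump across the whole range of the first feature, while the tree captures it with a single split; comparing held-out error is what makes that difference visible.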
Clean up resources
If you're not going to continue with this tutorial, delete the TutorialDB database.
Next steps
In part three of this tutorial series, you learned how to:
To deploy the machine learning model you've created, follow part four of this tutorial
series:
Deploy a predictive model in R with SQL machine learning
Tutorial: Deploy a predictive model in R
with SQL machine learning
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part four of this four-part tutorial series, you'll deploy a machine learning model
developed in R into Azure SQL Managed Instance using Machine Learning Services.
In part two, you learned how to import a sample database and then prepare the data to
be used for training a predictive model in R.
In part three, you learned how to create and train multiple machine learning models in
R, and then choose the most accurate one.
Prerequisites
Part four of this tutorial assumes you fulfilled the prerequisites of part one and
completed the steps in part two and part three.
SQL
USE [TutorialDB]
GO
AS
BEGIN
, @script = N'
#Create a dtree model and train it using the training data set
library(rpart);
model_dtree <- rpart(RentalCount ~ Month + Day + WeekDay + Snow + Holiday, data = rental_train_data);
trained_model <- as.raw(serialize(model_dtree, connection = NULL));
'
, @input_data_1 = N'
SELECT RentalCount
, Year
, Month
, Day
, WeekDay
, Snow
, Holiday
FROM dbo.rental_data
'
, @input_data_1_name = N'rental_train_data'
, @params = N'@trained_model varbinary(max) OUTPUT'
, @trained_model = @trained_model OUTPUT;
END;
GO
SQL
USE TutorialDB;
GO
);
GO
2. Save the model to the table as a binary object, with the model name "DTree".
SQL
model_name
, model
VALUES (
'DTree'
, @model
);
SELECT *
FROM rental_models;
SQL
-- Stored procedure that takes model name and new data as input parameters
and predicts the rental count for the new data
USE [TutorialDB]
GO
CREATE PROCEDURE predict_rentalcount_new (
@model_name VARCHAR(100)
, @input_query NVARCHAR(MAX)
AS
BEGIN
DECLARE @model VARBINARY(MAX) = (
SELECT model
FROM rental_models
WHERE model_name = @model_name
);
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
rental_model <- unserialize(as.raw(model));
rental_predictions <- data.frame(RentalCount_Predicted = predict(rental_model, rentals));
'
, @input_data_1 = @input_query
, @input_data_1_name = N'rentals'
, @output_data_1_name = N'rental_predictions'
, @model = @model
END;
GO
SQL
-- Use the predict_rentalcount_new stored procedure with the model name and
a set of features to predict the rental count
EXECUTE dbo.predict_rentalcount_new @model_name = 'DTree'
, @input_query = '
, CONVERT(INT, 4) AS WeekDay
, CONVERT(INT, 1) AS Snow
, CONVERT(INT, 1) AS Holiday
';
GO
results
RentalCount_Predicted
332.571428571429
You have successfully created, trained, and deployed a model in a database. You then
used that model in a stored procedure to predict values based on new data.
Clean up resources
When you've finished using the TutorialDB database, delete it from your server.
Next steps
In part four of this tutorial series, you learned how to:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In this four-part tutorial series, you'll use R to develop and deploy a K-Means clustering
model in Azure SQL Managed Instance Machine Learning Services to cluster customer
data.
In part one of this series, you'll set up the prerequisites for the tutorial and then restore
a sample dataset to a database.
In parts two and three, you'll develop some R scripts in
an Azure Data Studio notebook to analyze and prepare this sample data and train a
machine learning model. Then, in part four, you'll run those R scripts inside a database
using stored procedures.
Clustering can be explained as organizing data into groups where members of a group
are similar in some way. For this tutorial series, imagine you own a retail business. You'll
use the K-Means algorithm to perform the clustering of customers in a dataset of
product purchases and returns. By clustering customers, you can focus your marketing
efforts more effectively by targeting specific groups. K-Means clustering is an
unsupervised learning algorithm that looks for patterns in data based on similarities.
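The idea is easy to see on toy data. In the following Python sketch (the tutorial itself uses R), two visually obvious groups of points are recovered by K-Means without any labels being provided:

```python
import numpy as np
from sklearn.cluster import KMeans

# Six 2-D points forming two visually obvious groups
data = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                 [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
labels = km.labels_
print(labels)  # the first three points share one label, the last three the other
```

The customer segmentation in this series works the same way, just with return-behavior ratios as the coordinates instead of x and y positions.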
In part two, you'll learn how to prepare the data from a database to perform clustering.
In part three, you'll learn how to create and train a K-Means clustering model in R.
In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in R based on new data.
Prerequisites
Azure SQL Managed Instance Machine Learning Services. For information, see the
Azure SQL Managed Instance Machine Learning Services overview.
SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
Azure Data Studio. You'll use a notebook in Azure Data Studio for SQL. For more
information about notebooks, see How to use notebooks in Azure Data Studio.
RODBC - This driver is used in the R scripts you'll develop in this tutorial. If it's not
already installed, install it using the R command install.packages("RODBC") . For
more information on RODBC, see CRAN - Package RODBC .
3. You can verify that the dataset exists after you have restored the database by
querying the dbo.customer table:
SQL
USE tpcxbb_1gb;
Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.
Next steps
In part one of this tutorial series, you completed these steps:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part two of this four-part tutorial series, you'll prepare the data from a database to
perform clustering in R with Azure SQL Managed Instance Machine Learning Services.
In part one, you installed the prerequisites and restored the sample database.
In part three, you'll learn how to create and train a K-Means clustering model in R.
In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in R based on new data.
Prerequisites
Part two of this tutorial assumes you have completed part one.
Separate customers
Create a new RScript file in RStudio and run the following script.
In the SQL query, you're
separating customers along the following dimensions:
orderRatio = return order ratio (total number of orders partially or fully returned
versus the total number of orders)
itemsRatio = return item ratio (total number of items returned versus the number
of items purchased)
monetaryRatio = return amount ratio (total monetary amount of items returned
versus the amount purchased)
frequency = return frequency
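Each of these dimensions is a guarded division: when a customer has no orders, or either count is missing, the ratio falls back to 0.0 rather than dividing by zero. A small Python sketch of that rule (the function name is ours, for illustration):

```python
def safe_ratio(returned, ordered, digits=7):
    """Mirror the SQL CASE expression: return 0.0 when the denominator is
    zero or either value is NULL (None), else the rounded ratio."""
    if returned is None or ordered is None or ordered == 0:
        return 0.0
    return round(returned / ordered, digits)

print(safe_ratio(2, 10))     # 0.2
print(safe_ratio(None, 10))  # 0.0  (no returns recorded)
print(safe_ratio(3, 0))      # 0.0  (no orders: avoid division by zero)
```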
In the connStr function, replace ServerName with your own connection information.
R
WHEN (
(orders_count = 0)
OR (returns_count IS NULL)
OR (orders_count IS NULL)
THEN 0.0
END, 7) AS orderRatio
,round(CASE
WHEN (
(orders_items = 0)
OR (returns_items IS NULL)
OR (orders_items IS NULL)
THEN 0.0
END, 7) AS itemsRatio
,round(CASE
WHEN (
(orders_money = 0)
OR (returns_money IS NULL)
OR (orders_money IS NULL)
THEN 0.0
END, 7) AS monetaryRatio
,round(CASE
THEN 0.0
ELSE returns_count
END, 0) AS frequency
FROM (
SELECT ss_customer_sk,
COUNT(ss_item_sk) AS orders_items,
SUM(ss_net_paid) AS orders_money
FROM store_sales s
GROUP BY ss_customer_sk
) orders
SELECT sr_customer_sk,
COUNT(sr_item_sk) AS returns_items,
SUM(sr_return_amt) AS returns_money
FROM store_returns
GROUP BY sr_customer_sk
library(RODBC)
#Connection string to the tpcxbb_1gb database; replace ServerName with your own
connStr <- "Driver=SQL Server;Server=ServerName;Database=tpcxbb_1gb;uid=Username;pwd=Password"
ch <- odbcDriverConnect(connStr)
#Run the query above and read the customer return ratios into a data frame
customer_data <- sqlQuery(ch, input_query)
head(customer_data, n = 5);
results
customer orderRatio itemsRatio monetaryRatio frequency
1 29727 0 0 0.000000 0
2 26429 0 0 0.041979 1
3 60053 0 0 0.065762 3
4 97643 0 0 0.037034 3
5 32549 0 0 0.031281 4
Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.
Next steps
In part two of this tutorial series, you learned how to:
To create a machine learning model that uses this customer data, follow part three of
this tutorial series:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part three of this four-part tutorial series, you'll build a K-Means model in R to
perform clustering. In the next part of this series, you'll deploy this model in a database
with Azure SQL Managed Instance Machine Learning Services.
In part one, you installed the prerequisites and restored the sample database.
In part two, you learned how to prepare the data from a database to perform clustering.
In part four, you'll learn how to create a stored procedure in a database that can
perform clustering in R based on new data.
Prerequisites
Part three of this tutorial series assumes you have fulfilled the prerequisites of part
one and completed the steps in part two.
The algorithm accepts two inputs: The data itself, and a predefined number "k"
representing the number of clusters to generate.
The output is k clusters with the input
data partitioned among the clusters.
To determine the number of clusters for the algorithm to use, use a plot of the within
groups sum of squares, by number of clusters extracted. The appropriate number of
clusters to use is at the bend or "elbow" of the plot.
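The same elbow diagnostic can be sketched in Python with scikit-learn. The data below is synthetic, with four well-separated groups, so the elbow lands near k = 4:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
centers = [(0, 0), (4, 0), (0, 4), (4, 4)]
data = np.vstack([rng.normal(c, 0.2, size=(30, 2)) for c in centers])

# Within-groups sum of squares (inertia) for an increasing number of clusters
wss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(data).inertia_
       for k in range(1, 9)]
print([round(w, 1) for w in wss])  # drops sharply until k = 4, then flattens
```

Plotting `wss` against k reproduces the elbow chart the R script draws: the steep drop ends where adding more clusters stops explaining much additional variance.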
R
#Determine the number of clusters to use by plotting the within-groups sum of squares
wss <- (nrow(customer_data) - 1) * sum(apply(customer_data[, 2:5], 2, var))
for (i in 2:20)
  wss[i] <- sum(kmeans(customer_data[, 2:5], centers = i)$withinss)
plot(1:20, wss, type = "b", xlab = "Number of Clusters", ylab = "Within groups sum of squares")
Based on the graph, it looks like k = 4 would be a good value to try. That k value will
group the customers into four clusters.
Perform clustering
In the following R script, you'll use the function kmeans to perform clustering.
#Perform the clustering with 4 clusters, then combine the cluster assignment
#with the customer data in a data frame called customer_cluster
clust <- kmeans(customer_data[, 2:5], centers = 4, nstart = 25)
customer_cluster <- data.frame(
  cluster = clust$cluster,
  customer = customer_data$customer,
  orderRatio = customer_data$orderRatio,
  itemsRatio = customer_data$itemsRatio,
  monetaryRatio = customer_data$monetaryRatio,
  frequency = customer_data$frequency)
head(customer_cluster)
clust[-1]
results
$centers
$totss
[1] 40191.83
$withinss
$tot.withinss
[1] 20744.29
$betweenss
[1] 19447.54
$size
$iter
[1] 3
$ifault
[1] 0
The four cluster means are given using the variables defined in part two:
orderRatio = return order ratio (total number of orders partially or fully returned
versus the total number of orders)
itemsRatio = return item ratio (total number of items returned versus the number
of items purchased)
monetaryRatio = return amount ratio (total monetary amount of items returned
versus the amount purchased)
frequency = return frequency
Data mining using K-Means often requires further analysis of the results, and further
steps to better understand each cluster, but it can provide some good leads.
Here are a
couple ways you could interpret these results:
Cluster 1 (the largest cluster) seems to be a group of customers that are not active
(all values are zero).
Cluster 3 seems to be a group that stands out in terms of return behavior.
Clean up resources
If you're not going to continue with this tutorial, delete the tpcxbb_1gb database.
Next steps
In part three of this tutorial series, you learned how to:
To deploy the machine learning model you've created, follow part four of this tutorial
series:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part four of this four-part tutorial series, you'll deploy a clustering model, developed
in R, into a database using Azure SQL Managed Instance Machine Learning Services.
In order to perform clustering on a regular basis, as new customers register, you
need to be able to call the R script from any app. To do that, you can deploy the R script in
a database by putting the R script inside a SQL stored procedure. Because your model
executes in the database, it can easily be trained against data stored in the database.
In part one, you installed the prerequisites and restored the sample database.
In part two, you learned how to prepare the data from a database to perform clustering.
In part three, you learned how to create and train a K-Means clustering model in R.
Prerequisites
Part four of this tutorial series assumes you have fulfilled the prerequisites of part
one and completed the steps in part two and part three.
SQL
USE [tpcxbb_1gb]
GO
AS
/*
*/
BEGIN
round(CASE
WHEN (
(orders_count = 0)
OR (returns_count IS NULL)
OR (orders_count IS NULL)
THEN 0.0
END, 7) AS orderRatio,
round(CASE
WHEN (
(orders_items = 0)
OR (returns_items IS NULL)
OR (orders_items IS NULL)
THEN 0.0
END, 7) AS itemsRatio,
round(CASE
WHEN (
(orders_money = 0)
OR (returns_money IS NULL)
OR (orders_money IS NULL)
THEN 0.0
END, 7) AS monetaryRatio,
round(CASE
THEN 0.0
ELSE returns_count
END, 0) AS frequency
FROM (
SELECT ss_customer_sk,
COUNT(ss_item_sk) AS orders_items,
SUM(ss_net_paid) AS orders_money
FROM store_sales s
GROUP BY ss_customer_sk
) orders
SELECT sr_customer_sk,
COUNT(sr_item_sk) AS returns_items,
SUM(sr_return_amt) AS returns_money
FROM store_returns
GROUP BY sr_customer_sk
'
EXECUTE sp_execute_external_script
@language = N'R'
, @script = N'
sep="" )
library(RODBC)
ch <- odbcDriverConnect(connStr);
sqlDrop(ch, "customer_return_clusters")
customer_cluster <-
data.frame(cluster=clust$cluster,customer=customer_data$customer,orderRatio=
customer_data$orderRatio,
itemsRatio=customer_data$itemsRatio,monetaryRatio=customer_data$monetaryRati
o,frequency=customer_data$frequency)
## clean up
odbcClose(ch)
'
, @input_data_1 = N''
, @instance_name = @instance_name
, @database_name = @database_name
, @input_query = @input_query
, @duration = @duration OUTPUT;
END;
GO
Perform clustering
Now that you've created the stored procedure, execute the following script to perform
clustering.
SQL
EXECUTE [dbo].[generate_customer_return_clusters];
Verify that it works and that we actually have the list of customers and their cluster
mappings.
SQL
SELECT *
FROM customer_return_clusters;
result

cluster customer orderRatio itemsRatio monetaryRatio frequency
1       29727    0          0          0             0
4       26429    0          0          0.041979      1
2       60053    0          0          0.065762      3
2       97643    0          0          0.037034      3
2       32549    0          0          0.031281      4
Suppose you want to send a promotional email to customers in cluster 0, the group that
was inactive (you can see how the four clusters were described in part three of this
tutorial). The following code selects the email addresses of customers in cluster 0.
SQL
USE [tpcxbb_1gb]
SELECT customer.c_email_address
FROM dbo.customer
JOIN
[dbo].[customer_clusters] as c
ON c.Customer = customer.c_customer_sk
WHERE c.cluster = 0
You can change the c.cluster value to return email addresses for customers in other
clusters.
Clean up resources
When you're finished with this tutorial, you can delete the tpcxbb_1gb database.
Next steps
In part four of this tutorial series, you learned how to:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In this five-part tutorial series for SQL programmers, you'll learn about R integration in
Machine Learning Services in Azure SQL Managed Instance.
You'll build and deploy an R-based machine learning solution using a sample database
on SQL Server. You'll use T-SQL, Azure Data Studio or SQL Server Management Studio,
and a database engine instance with SQL machine learning and R language support.
This tutorial series introduces you to R functions used in a data modeling workflow.
Parts include data exploration, building and training a binary classification model, and
model deployment. You'll use sample data from the New York City Taxi and Limousine
Commission. The model you'll build predicts whether a trip is likely to result in a tip
based on the time of day, distance traveled, and pick-up location.
In the first part of this series, you'll install the prerequisites and restore the sample
database. In parts two and three, you'll develop some R scripts to prepare your data and
train a machine learning model. Then, in parts four and five, you'll run those R scripts
inside the database using T-SQL stored procedures.
Install prerequisites
Restore the sample database
In part two, you'll explore the sample data and generate some plots.
In part three, you'll learn how to create features from raw data by using a Transact-SQL
function. You'll then call that function from a stored procedure to create a table that
contains the feature values.
In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.
In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
Note
This tutorial is available in both R and Python. For the Python version, see Python
tutorial: Predict NYC taxi fares with binary classification.
Prerequisites
Install R libraries
All tasks can be done using Transact-SQL stored procedures in Azure Data Studio or
Management Studio.
This tutorial assumes familiarity with basic database operations such as creating
databases and tables, importing data, and writing SQL queries. It doesn't assume you
know R; all R code is provided.
Development and testing of the actual code is best performed using a dedicated R
development environment. However, after the script is fully tested, you can easily deploy
it to SQL Server using Transact-SQL stored procedures in the familiar environment of
Azure Data Studio or Management Studio. Wrapping external code in stored procedures
is the primary mechanism for operationalizing code in SQL Server.
After the model has been saved to the database, you can call the model for predictions
from Transact-SQL by using stored procedures.
Whether you're a SQL programmer new to R, or an R developer new to SQL, this five-
part tutorial series introduces a typical workflow for conducting in-database analytics
with R and SQL Server.
Next steps
In this article, you:
Installed prerequisites
Restored the sample database
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part two of this five-part tutorial series, you'll review the sample data and then
generate some plots using the generic barplot and hist functions in base R.
A key objective of this article is showing how to call R functions from Transact-SQL in
stored procedures and save the results in application file formats.
Note
Because visualization is such a powerful tool for understanding data shape and
distribution, R provides a range of functions and packages for generating
histograms, scatter plots, box plots, and other data exploration graphs. R typically
creates images using an R device for graphical output, which you can capture and
store as a varbinary data type for rendering in an application. You can also save the
images to any of the supported file formats (.JPG, .PDF, etc.).
In part one, you installed the prerequisites and restored the sample database.
In part three, you'll learn how to create features from raw data by using a Transact-SQL
function. You'll then call that function from a stored procedure to create a table that
contains the feature values.
In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.
In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
In the original public dataset, the taxi identifiers and trip records were provided in
separate files. However, to make the sample data easier to use, the two original datasets
have been joined on the columns medallion, hack_license, and pickup_datetime. The
records were also sampled to get just 1% of the original number of records. The
resulting down-sampled dataset has 1,703,957 rows and 23 columns.
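Down-sampling like this is easy to reproduce in client code. Here is a minimal Python sketch: the 1% keep-probability matches the text, while the seed and the placeholder records are assumptions of this sketch.

```python
import random

random.seed(1)  # assumed seed, only for reproducibility

# Pretend each integer is one trip record in the full dataset
full_dataset = list(range(100_000))

# Keep roughly 1% of the records
sample = [row for row in full_dataset if random.random() < 0.01]
print(len(sample))  # roughly 1,000 records
```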
Taxi identifiers
The hack_license column contains the taxi driver's license number (anonymized).
Each trip record includes the pickup and drop-off location and time, and the trip
distance.
Each fare record includes payment information such as the payment type, total
amount of payment, and the tip amount.
The last three columns can be used for various machine learning tasks. The
tip_amount column contains continuous numeric values and can be used as the
label column for regression analysis. The tipped column has only yes/no values and
is used for binary classification. The tip_class column has multiple class labels and
therefore can be used as the label for multi-class classification tasks.
This walkthrough demonstrates only the binary classification task; you are welcome
to try building models for the other two machine learning tasks, regression and
multiclass classification.
The values used for the label columns are all based on the tip_amount column,
using these business rules:
Derived column name Rule
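Because the rule table above was truncated during extraction, here is a minimal Python sketch of how such label columns are typically derived from tip_amount. The tipped flag follows the text directly; the tip_class bin boundaries are an assumption of this sketch, not the document's exact rules.

```python
def derive_labels(tip_amount: float) -> dict:
    """Derive the three label columns from tip_amount."""
    # Binary label for classification: did the trip get any tip?
    tipped = 1 if tip_amount > 0 else 0

    # Multi-class label: assumed example bins on the tip amount
    if tip_amount == 0:
        tip_class = 0
    elif tip_amount <= 5:
        tip_class = 1
    elif tip_amount <= 10:
        tip_class = 2
    elif tip_amount <= 20:
        tip_class = 3
    else:
        tip_class = 4

    return {"tip_amount": tip_amount, "tipped": tipped, "tip_class": tip_class}
```

The continuous tip_amount itself stays available as the regression label.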
2. Paste in the following script to create a stored procedure that plots the histogram.
This example is named RPlotHistogram.
SQL
CREATE PROCEDURE [dbo].[RPlotHistogram]
AS
BEGIN
    DECLARE @query nvarchar(max) = N'SELECT tipped FROM nyctaxi_sample';
    EXECUTE sp_execute_external_script
        @language = N'R',
        @script = N'
            image_file = tempfile();
            jpeg(filename = image_file);
            #Plot histogram
            hist(InputDataSet$tipped, col = ''lightgreen'', main = ''Tipped'');
            dev.off();
            OutputDataSet <- data.frame(data = readBin(file(image_file, "rb"), what = raw(), n = 1e6));
        ',
        @input_data_1 = @query
        WITH RESULT SETS ((plot varbinary(max)));
END
GO
The variable @query defines the query text ( 'SELECT tipped FROM nyctaxi_sample' ),
which is passed to the R script as the argument to the script input variable,
@input_data_1 . For R scripts that run as external processes, you should have a one-
to-one mapping between inputs to your script, and inputs to the
sp_execute_external_script system stored procedure that starts the R session on
SQL Server.
The R device is set to off because you are running this command as an external
script in SQL Server. Typically in R, when you issue a high-level plotting command,
R opens a graphics window, called a device. You can turn the device off if you are
writing to a file or handling the output some other way.
SQL
EXEC [dbo].[RPlotHistogram]
Results
plot
0xFFD8FFE000104A4649...
2. Open a PowerShell command prompt and run the following command, providing
the appropriate instance name, database name, username, and credentials as
arguments. For those using Windows identities, you can replace -U and -P with -T.
PowerShell
Note
Press ENTER at each prompt to accept the defaults, except for these changes:
Type Y if you want to save the output parameters for later reuse.
text
Results
text
Starting copy...
1 rows copied.
Tip
If you save the format information to file (bcp.fmt), the bcp utility generates a
format definition that you can apply to similar commands in future without
being prompted for graphic file format options. To use the format file, add -f
bcp.fmt to the end of any command line, after the password argument.
4. The output file will be created in the same directory where you ran the PowerShell
command. To view the plot, just open the file plot.jpg.
This stored procedure uses the hist function to create the histogram, exporting the
binary data to popular formats such as .JPG, .PDF, and .PNG.
2. Paste in the following script to create a stored procedure that plots the histogram.
This example is named RPlotHist.
SQL
CREATE PROCEDURE [dbo].[RPlotHist]
AS
BEGIN
DECLARE @query nvarchar(max) = N'SELECT tipped, tip_amount, fare_amount FROM nyctaxi_sample';
# Set output directory for files and check for existing files with the same names
setwd(mainDir);
print(dest_filename, quote=FALSE);
jpeg(filename=dest_filename);
dev.off();
# Open a pdf file and output histograms of tip amount and fare amount.
dest_filename = tempfile(pattern = ''rHistograms_Tip_and_Fare_Amount_'', tmpdir = mainDir)
print(dest_filename, quote=FALSE);
par(mfrow=c(1,2));
ylab = ''Counts'',
ylab = ''Counts'',
main = ''Histogram, Fare amount'',
dev.off();
# Open a pdf file and output an xyplot of tip amount vs. fare amount using lattice;
dest_filename = tempfile(pattern = ''rXYPlots_Tip_vs_Fare_Amount_'', tmpdir = mainDir)
print(dest_filename, quote=FALSE);
plot(tip_amount ~ fare_amount,
ylim = c(0,50),
xlim = c(0,150),
cex=.5,
pch=19,
col=''darkgreen'',
dev.off();',
@input_data_1 = @query
END
The output of the SELECT query within the stored procedure is stored in the
default R data frame, InputDataSet . Various R plotting functions can then be called
to generate the actual graphics files. Most of the embedded R script represents
options for these graphics functions, such as plot or hist .
The R device is set to off because you are running this command as an external
script in SQL Server. Typically in R, when you issue a high-level plotting command,
R opens a graphics window, called a device. You can turn the device off if you are
writing to a file or handling the output some other way.
All files are saved to the local folder C:\temp\Plots. The destination folder is
defined by the arguments provided to the R script as part of the stored procedure.
To output the files to a different folder, change the value of the mainDir variable in
the R script embedded in the stored procedure. You can also modify the script to
output different formats, more files, and so on.
SQL
EXEC RPlotHist
Results
text
C:\temp\plots\rHistograms_Tip_and_Fare_Amount_1888441e542c.pdf[1]
C:\temp\plots\rXYPlots_Tip_vs_Fare_Amount_18887c9d517b.pdf
The numbers in the file names are randomly generated to ensure that you don't get an
error when trying to write to an existing file.
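Python's tempfile module gives the same collision-free naming behavior as R's tempfile(). A minimal sketch follows; the filename prefix mirrors the R example and is otherwise arbitrary.

```python
import os
import tempfile

# Two uniquely named .pdf paths with the same prefix; the random
# suffix prevents collisions with files that already exist.
f1 = tempfile.NamedTemporaryFile(prefix="rHistograms_Tip_and_Fare_Amount_",
                                 suffix=".pdf", delete=False)
f2 = tempfile.NamedTemporaryFile(prefix="rHistograms_Tip_and_Fare_Amount_",
                                 suffix=".pdf", delete=False)
print(f1.name, f2.name)

# Clean up the placeholder files
f1.close(); f2.close()
os.unlink(f1.name); os.unlink(f2.name)
```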
View output
To view the plot, open the destination folder and review the files that were created by
the R code in the stored procedure.
1. Go to the folder indicated in the STDOUT message (in the example, this is
C:\temp\plots)
2. Open rHistogram_Tipped.jpg to show the number of trips that got a tip vs. the
trips that got no tip (this histogram is similar to the one you generated in the
previous step).
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part three of this five-part tutorial series, you'll learn how to create features from raw
data by using a Transact-SQL function. You'll then call that function from a SQL stored
procedure to create a table that contains the feature values.
In part one, you installed the prerequisites and restored the sample database.
In part two, you reviewed the sample data and generated some plots.
In part four, you'll load the modules and call the necessary functions to create and train
the model using a SQL Server stored procedure.
In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
In this dataset, the distance values are based on the reported meter distance, and don't
necessarily represent geographical distance or the actual distance traveled. Therefore,
you'll need to calculate the direct distance between the pick-up and drop-off points, by
using the coordinates available in the source NYC Taxi dataset. You can do this by using
the Haversine formula in a custom Transact-SQL function.
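For comparison, the same Haversine computation can be sketched in plain Python; the Earth-radius constant in miles is an assumption of this sketch.

```python
import math

def haversine_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Direct distance in miles between two (lat, lon) points."""
    earth_radius_miles = 3958.75  # assumed mean Earth radius

    # Convert to radians
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlambda = math.radians(lon2 - lon1)

    # Haversine formula
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlambda / 2) ** 2)
    return 2 * earth_radius_miles * math.asin(math.sqrt(a))
```

For example, the pickup and drop-off coordinates used later in this series are a little over a mile apart by this measure.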
You'll use one custom T-SQL function, fnCalculateDistance, to compute the distance
using the Haversine formula, and use a second custom T-SQL function,
fnEngineerFeatures, to create a table containing all the features.
The overall process is as follows:
SQL
RETURNS float
AS
BEGIN
-- Convert to radians
-- Calculate distance
--Convert to miles
IF @distance <> 0
BEGIN
END
RETURN @distance
END
GO
It takes latitude and longitude values as inputs, obtained from trip pick-up
and drop-off locations. The Haversine formula converts locations to radians
and uses those values to compute the direct distance in miles between those
two locations.
1. Take a minute to review the code for the custom T-SQL function,
fnEngineerFeatures, which should have been created for you as part of the
preparation for this walkthrough.
SQL
@passenger_count int = 0,
@trip_distance float = 0,
@trip_time_in_secs int = 0,
@pickup_latitude float = 0,
@pickup_longitude float = 0,
@dropoff_latitude float = 0,
@dropoff_longitude float = 0)
RETURNS TABLE
AS
RETURN
SELECT
@passenger_count AS passenger_count,
@trip_distance AS trip_distance,
@trip_time_in_secs AS trip_time_in_secs,
[dbo].[fnCalculateDistance](@pickup_latitude, @pickup_longitude,
@dropoff_latitude, @dropoff_longitude) AS direct_distance
GO
This is a table-valued function that takes multiple columns as inputs and outputs
a table with multiple feature columns.
The purpose of this function is to create new features for use in building a
model.
2. To verify that this function works, use it to calculate the geographical distance for
those trips where the metered distance was 0 but the pick-up and drop-off
locations were different.
SQL
SELECT trip_distance,
    dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
    dropoff_latitude, dropoff_longitude) AS direct_distance
FROM nyctaxi_sample
WHERE trip_distance = 0
    AND pickup_longitude <> dropoff_longitude
As you can see, the distance reported by the meter doesn't always correspond to
geographical distance. This is why feature engineering is so important. You can use
these improved data features to train a machine learning model using R.
Next steps
In this article, you:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part four of this five-part tutorial series, you'll learn how to train a machine learning
model by using R. You'll train the model using the data features you created in the
previous part, and then save the trained model in a SQL Server table. In this case, the R
packages are already installed with R Services (In-Database), so everything can be done
from SQL.
In part one, you installed the prerequisites and restored the sample database.
In part two, you reviewed the sample data and generated some plots.
In part three, you learned how to create features from raw data by using a Transact-SQL
function. You then called that function from a stored procedure to create a table that
contains the feature values.
In part five, you'll learn how to operationalize the models that you trained and saved in
part four.
SQL
CREATE PROCEDURE [dbo].[RTrainLogitModel] (@trained_model varbinary(max) OUTPUT)
AS
BEGIN
    DECLARE @inquery nvarchar(max) = N'
        select tipped, fare_amount, passenger_count, trip_time_in_secs, trip_distance,
        pickup_datetime, dropoff_datetime,
        dbo.fnCalculateDistance(pickup_latitude, pickup_longitude,
        dropoff_latitude, dropoff_longitude) as direct_distance
        from nyctaxi_sample
        tablesample (70 percent)
    '
    EXEC sp_execute_external_script
        @language = N'R',
        @script = N'
            ## Create model
            logitObj <- glm(tipped ~ passenger_count + trip_distance + trip_time_in_secs + direct_distance,
                data = InputDataSet, family = binomial)
            summary(logitObj)
            ## Serialize model
            trained_model <- as.raw(serialize(logitObj, connection = NULL));
        ',
        @input_data_1 = @inquery,
        @params = N'@trained_model varbinary(max) OUTPUT',
        @trained_model = @trained_model OUTPUT;
END
GO
To ensure that some data is left over to test the model, 70% of the data are
randomly selected from the taxi data table for training purposes.
The R script calls the R function glm to create the logistic regression model.
The binary variable tipped is used as the label or outcome column, and the
model is fit using these feature columns: passenger_count, trip_distance,
trip_time_in_secs, and direct_distance.
1. To train and deploy the R model, call the stored procedure and insert it into the
database table nyc_taxi_models, so that you can use it for future predictions:
SQL
2. Watch the Messages window of Management Studio for messages that would be
piped to R's stdout stream, like this message:
"STDOUT message(s) from external script: Rows Read: 1193025, Total Rows
Processed: 1193025, Total Chunk Time: 0.093 seconds"
3. When the statement has completed, open the table nyc_taxi_models. Processing of
the data and fitting the model might take a while.
You can see that one new row has been added, which contains the serialized
model in the column model and the model name RTrainLogit_model in the column
name.
text
model name
---------------------------- ------------------
0x580A00000002000302020.... RTrainLogit_model
In the next part of this tutorial you'll use the trained model to generate predictions.
Next steps
In this article, you:
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
In part five of this five-part tutorial series, you'll learn to operationalize the model that
you trained and saved in the previous part by using the model to predict potential
outcomes. The model is wrapped in a stored procedure which can be called directly by
other applications.
Batch scoring mode: Use a SELECT query as an input to the stored procedure. The
stored procedure returns a table of observations corresponding to the input cases.
Individual scoring mode: Pass a set of individual parameter values as input. The
stored procedure returns a single row or value.
In part one, you installed the prerequisites and restored the sample database.
In part two, you reviewed the sample data and generated some plots.
In part three, you learned how to create features from raw data by using a Transact-SQL
function. You then called that function from a stored procedure to create a table that
contains the feature values.
In part four, you loaded the modules and called the necessary functions to create and
train the model using a SQL Server stored procedure.
Basic scoring
The stored procedure RPredict illustrates the basic syntax for wrapping a PREDICT call in
a stored procedure.
SQL
CREATE PROCEDURE [dbo].[RPredict] (@model varchar(250), @inquery
nvarchar(max))
AS
BEGIN
    DECLARE @lmodel2 varbinary(max) = (SELECT model FROM nyc_taxi_models WHERE name = @model);
    EXEC sp_execute_external_script
        @language = N'R',
        @script = N'
            mod <- unserialize(as.raw(model));
            print(summary(mod))
            OutputDataSet <- data.frame(Score = predict(mod, InputDataSet, type = "response"));
            str(OutputDataSet)
            print(OutputDataSet)
        ',
        @input_data_1 = @inquery,
        @params = N'@model varbinary(max)',
        @model = @lmodel2
        WITH RESULT SETS ((Score float));
END
GO
The SELECT statement gets the serialized model from the database, and stores the
model in the R variable mod for further processing using R.
The new cases for scoring are obtained from the Transact-SQL query specified in
@inquery , the first parameter to the stored procedure. As the query data is read,
the rows are saved in the default data frame, InputDataSet . This data frame is
passed to the PREDICT function which generates the scores.
Because a data.frame can contain a single row, you can use the same code for
batch or single scoring.
The value returned by the PREDICT function is a float that represents the
probability that the driver gets a tip of any amount.
1. Start by getting a smaller set of input data to work with. This query creates a "top
10" list of trips with passenger count and other features needed to make a
prediction.
SQL
AND a.pickup_datetime=b.pickup_datetime
Sample results
text
SQL
BEGIN
EXEC sp_execute_external_script
@language = N'R',
@script = N'
print(summary(mod))
str(OutputDataSet)
print(OutputDataSet)
',
@input_data_1 = @inquery,
@model = @lmodel2
END
3. Provide the query text in a variable and pass it as a parameter to the stored
procedure:
SQL
-- Call the stored procedure for scoring and pass the input data
The stored procedure returns a series of values representing the prediction for each of
the top 10 trips. However, the top trips are also single-passenger trips with a relatively
short trip distance, for which the driver is unlikely to get a tip.
Tip
Rather than returning just the "yes-tip" and "no-tip" results, you could also return
the probability score for the prediction, and then apply a WHERE clause to the
Score column values to categorize the score as "likely to tip" or "unlikely to tip",
using a threshold value such as 0.5 or 0.7. This step is not included in the stored
procedure but it would be easy to implement.
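That thresholding step can be sketched in Python; the 0.5 cutoff below is just one of the example values mentioned above.

```python
def categorize(score: float, threshold: float = 0.5) -> str:
    """Map a predicted tip probability to a human-readable label."""
    return "likely to tip" if score >= threshold else "unlikely to tip"

# Hypothetical probability scores returned by the model
scores = [0.03, 0.62, 0.48, 0.91]
labels = [categorize(s) for s in scores]
print(labels)
```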
Single-row scoring of multiple inputs
Sometimes you want to pass in multiple input values and get a single prediction based
on those values. For example, you could set up an Excel worksheet, web application, or
Reporting Services report to call the stored procedure and provide inputs typed or
selected by users from those applications.
In this section, you learn how to create single predictions using a stored procedure that
takes multiple inputs, such as passenger count, trip distance, and so forth. The stored
procedure creates a score based on the previously stored R model.
If you call the stored procedure from an external application, make sure that the data
matches the requirements of the R model. This might include ensuring that the input
data can be cast or converted to an R data type, or validating data type and data length.
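Such validation might look like the following Python sketch; the parameter names mirror the stored procedure's inputs, and the cast choices and range checks are assumptions of this sketch.

```python
def validate_trip_inputs(passenger_count, trip_distance, trip_time_in_secs,
                         pickup_latitude, pickup_longitude,
                         dropoff_latitude, dropoff_longitude):
    """Cast inputs to the types the model expects and range-check them."""
    params = {
        "passenger_count": int(passenger_count),
        "trip_distance": float(trip_distance),
        "trip_time_in_secs": int(trip_time_in_secs),
        "pickup_latitude": float(pickup_latitude),
        "pickup_longitude": float(pickup_longitude),
        "dropoff_latitude": float(dropoff_latitude),
        "dropoff_longitude": float(dropoff_longitude),
    }
    # Reject coordinates that can't be real lat/lon values
    for key in ("pickup_latitude", "dropoff_latitude"):
        if not -90.0 <= params[key] <= 90.0:
            raise ValueError(f"{key} out of range")
    for key in ("pickup_longitude", "dropoff_longitude"):
        if not -180.0 <= params[key] <= 180.0:
            raise ValueError(f"{key} out of range")
    return params
```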
SQL
AS
BEGIN
EXEC sp_execute_external_script
@language = N'R',
@script = N'
print(summary(mod));
str(OutputDataSet);
print(OutputDataSet);
',
@input_data_1 = @inquery,
END
Open a new Query window, and call the stored procedure, providing values for
each of the parameters. The parameters represent feature columns used by the
model and are required.
SQL
@passenger_count = 1,
@trip_distance = 2.5,
@trip_time_in_secs = 631,
@pickup_latitude = 40.763958,
@pickup_longitude = -73.973373,
@dropoff_latitude = 40.782139,
@dropoff_longitude = -73.977303
Or, use this shorter form supported for parameters to a stored procedure:
SQL
3. The results indicate that the probability of getting a tip is low (zero) on these top
10 trips, since all are single-passenger trips over a relatively short distance.
Conclusions
Now that you have learned to embed R code in stored procedures, you can extend
these practices to build models of your own. The integration with Transact-SQL makes it
much easier to deploy R models for prediction and to incorporate model retraining as
part of an enterprise data workflow.
Next steps
In this article, you:
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
This article describes how to plot data using the Python pandas .hist() method. A SQL
database is the source, and the histogram visualizes data intervals that have
consecutive, non-overlapping values.
Prerequisites
Azure SQL Managed Instance
SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
SQL
USE AdventureWorksDW;
pyodbc
pandas
sqlalchemy
matplotlib
To install these packages:
Plot histogram
The distributed data displayed in the histogram is based on a SQL query from
AdventureWorksDW . The histogram visualizes data and the frequency of data values.
Edit the connection string variables 'server', 'database', 'username', and 'password' to
connect to the SQL Server database.
Python
import pyodbc
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

matplotlib.use('TkAgg', force=True)

server = 'servername'
port = '1433'
database = 'AdventureWorksDW2019'
username = 'yourusername'
password = 'yourpassword'

# Example query; adjust to return the columns you want to plot
sql = 'SELECT * FROM FactInternetSales'

url = 'mssql+pyodbc://{user}:{passwd}@{host}:{port}/{db}?driver=SQL+Server'.format(
    user=username, passwd=password, host=server, port=port, db=database)
engine = create_engine(url)
df = pd.read_sql(sql, engine)

df.hist(bins=50)
plt.show()
The display shows the age distribution of customers in the FactInternetSales table.
Insert data from a SQL table into a
Python pandas dataframe
Article • 02/28/2023
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
This article describes how to insert SQL data into a pandas dataframe using the
pyodbc package in Python. The rows and columns of data contained within the
dataframe can be used for further data exploration.
Prerequisites
Azure SQL Managed Instance
SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
SQL
USE AdventureWorks;
pyodbc
pandas
Insert data
Use the following script to select data from Person.CountryRegion table and insert into a
dataframe. Edit the connection string variables: 'server', 'database', 'username', and
'password' to connect to SQL.
Python
import pyodbc
import pandas as pd

server = 'servername'
database = 'AdventureWorks'
username = 'yourusername'
password = 'yourpassword'

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+password)

query = "SELECT CountryRegionCode, Name FROM Person.CountryRegion"
df = pd.read_sql(query, cnxn)
print(df.head(26))
Output
The print command in the preceding script displays the rows of data from the pandas
dataframe df .
text
CountryRegionCode Name
0 AF Afghanistan
1 AL Albania
2 DZ Algeria
3 AS American Samoa
4 AD Andorra
5 AO Angola
6 AI Anguilla
7 AQ Antarctica
8 AG Antigua and Barbuda
9 AR Argentina
10 AM Armenia
11 AW Aruba
12 AU Australia
13 AT Austria
14 AZ Azerbaijan
15 BS Bahamas, The
16 BH Bahrain
17 BD Bangladesh
18 BB Barbados
19 BY Belarus
20 BE Belgium
21 BZ Belize
22 BJ Benin
23 BM Bermuda
24 BT Bhutan
25 BO Bolivia
Next steps
Insert Python dataframe into SQL
Insert Python dataframe into SQL table
Article • 02/28/2023
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
This article describes how to insert a pandas dataframe into a SQL database using the
pyodbc package in Python.
Prerequisites
Azure SQL Managed Instance
SQL Server Management Studio for restoring the sample database to Azure SQL
Managed Instance.
Azure Data Studio. To install, see Download and install Azure Data Studio.
Follow the steps in AdventureWorks sample databases to restore the OLTP version
of the AdventureWorks sample database for your version of SQL Server.
You can verify that the database was restored correctly by querying the
HumanResources.Department table:
SQL
USE AdventureWorks;
4. For each of the following packages, enter the package name, click Search, then
click Install.
pyodbc
pandas
text
DepartmentID,Name,GroupName,
5,Purchasing,Inventory Management,
7,Production,Manufacturing,
8,Production Control,Manufacturing,
SQL
GO
2. Paste the following code into a code cell, updating the code with the correct values
for server , database , username , password , and the location of the CSV file.
Python
import pyodbc
import pandas as pd

# working directory for csv file: type "pwd" in Azure Data Studio or Linux
df = pd.read_csv("c:\\user\\username\\department.csv")

server = 'yourservername'
database = 'AdventureWorks'
username = 'username'
password = 'yourpassword'

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+password)
cursor = cnxn.cursor()

# Insert each dataframe row into the table
for index, row in df.iterrows():
    cursor.execute("INSERT INTO HumanResources.DepartmentTest (DepartmentID, Name, GroupName) VALUES (?, ?, ?)",
                   row.DepartmentID, row.Name, row.GroupName)
cnxn.commit()
cursor.close()
SQL
Results
Bash
16
Next steps
Plot a histogram for data exploration with Python
Data type mappings between Python
and SQL Server
Article • 03/03/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
This article lists the supported data types, and the data type conversions performed,
when using the Python integration feature in SQL Server Machine Learning Services.
SQL type         Python type   Description
---------------- ------------- -----------
bigint           float64
binary           bytes
bit              bool
char             str
date             datetime
datetime         datetime      Supported with SQL Server 2017 CU6 and above (with NumPy arrays of type datetime.datetime or Pandas pandas.Timestamp). sp_execute_external_script now supports datetime types with fractional seconds.
float            float64
nchar            str
nvarchar         str
nvarchar(max)    str
real             float64
smalldatetime    datetime
smallint         int32
tinyint          int32
uniqueidentifier str
varbinary        bytes
varbinary(max)   bytes
varchar(n)       str
varchar(max)     str
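As a convenience for checking result sets in client code, the mappings in this table can be captured in a small Python dictionary. This dictionary is an illustration built from the table above, not part of the product.

```python
# SQL Server type -> Python type name, per the table above
SQL_TO_PYTHON = {
    "bigint": "float64",
    "binary": "bytes",
    "bit": "bool",
    "char": "str",
    "date": "datetime",
    "datetime": "datetime",
    "float": "float64",
    "nchar": "str",
    "nvarchar": "str",
    "real": "float64",
    "smalldatetime": "datetime",
    "smallint": "int32",
    "tinyint": "int32",
    "uniqueidentifier": "str",
    "varbinary": "bytes",
    "varchar": "str",
}
```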
See also
Data type mappings between R and SQL Server
Data type mappings between R and SQL
Server
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
This article lists the supported data types, and the data type conversions performed,
when using the R integration feature in SQL Server Machine Learning Services.
Base R version
SQL Server 2016 R Services and SQL Server Machine Learning Services with R are
aligned with specific releases of Microsoft R Open. For example, the latest release, SQL
Server 2019 Machine Learning Services, is built on Microsoft R Open 3.5.2.
To view the R version associated with a particular instance of SQL Server, open RGui in
the SQL instance. For example, the path for the default instance in SQL Server 2019
would be: C:\Program Files\Microsoft SQL
Server\MSSQL15.MSSQLSERVER\R_SERVICES\bin\x64\Rgui.exe .
The tool loads base R and other libraries. Package version information is provided in a
notification for each package that is loaded at session start up.
This section lists the implicit conversions that are provided, and lists unsupported data
types. Some guidance is provided for mapping data types between R and SQL Server.
SQL type                R class     Result-set type  Comments
----------------------- ----------- ---------------- --------
binary(n), n <= 8000    raw         varbinary(max)   Only allowed as an input parameter and output
char(n), n <= 8000      character   varchar(max)     The input data frame (input_data_1) is created without explicitly setting the stringsAsFactors parameter, so the column type depends on default.stringsAsFactors() in R
varbinary(n), n <= 8000 raw         varbinary(max)   Only allowed as an input parameter and output
varchar(n), n <= 8000   character   varchar(max)     The input data frame (input_data_1) is created without explicitly setting the stringsAsFactors parameter, so the column type depends on default.stringsAsFactors() in R
Data types listed in the Other section of the SQL type system article: cursor,
timestamp, hierarchyid, uniqueidentifier, sql_variant, xml, table
All spatial types
image
For more information about SQL Server data types, see Data Types (Transact-SQL)
These improvements are all available by default when you use a database compatibility
level of 130 or later. However, if you use a different compatibility level, or connect to a
database using an older version, you might see differences in the precision of numbers
or other results.
For more information, see SQL Server 2016 improvements in handling some data types
and uncommon operations .
When retrieving data from a database for use in R code, you should always eliminate
columns that cannot be used in R, as well as columns that are not useful for analysis,
such as GUIDs (uniqueidentifier), timestamps and other columns used for auditing, or
lineage information created by ETL processes.
Note that inclusion of unnecessary columns can greatly reduce the performance of R
code, especially if high cardinality columns are used as factors. Therefore, we
recommend that you use SQL Server system stored procedures and information views to
get the data types for a given table in advance, and eliminate or convert incompatible
columns. For more information, see Information Schema Views in Transact-SQL
If a particular SQL Server data type is not supported by R, but you need to use the
columns of data in the R script, we recommend that you use the CAST and CONVERT
(Transact-SQL) functions to ensure that the data type conversions are performed as
intended before using the data in your R script.
Warning
If you use the rxDataStep to drop incompatible columns while moving data, be
aware that the arguments varsToKeep and varsToDrop are not supported for the
RxSqlServerData data source type.
Examples
The following query gets a series of values from a SQL Server table, and uses the stored
procedure sp_execute_external_script to return the values by using the R runtime.
SQL
CREATE TABLE MyData (
c1 int,
c2 varchar(10),
c3 uniqueidentifier
);
go
INSERT INTO MyData VALUES (1, 'Hello', newid());
INSERT INTO MyData VALUES (2, 'World', newid());
go
SELECT * FROM MyData;
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
inputDataSet["cR"] <- c(4, 2)
str(inputDataSet)
outputDataSet <- inputDataSet'
, @input_data_1 = N'SELECT c1, c2, c3 FROM MyData'
, @input_data_1_name = N'inputDataSet'
, @output_data_1_name = N'outputDataSet'
WITH RESULT SETS ((C1 int, C2 varchar(max), C3 varchar(max), C4 int));
Results
Row # C1 C2 C3 C4
1 1 Hello 6e225611-4b58-4995-a0a5-554d19012ef1 4
Note the use of the str function in R to get the schema of the output data. This
function returns the following information:
Output
$ cR: num 4 2
From this, you can see that the following data type conversions were implicitly
performed as part of this query:
Column C1. The column is represented as int in SQL Server, integer in R, and int in
the output result set.
Column C2. Note how the output changes: any string from R (either a factor or a
regular string) is represented as varchar(max), no matter what the length of the
strings is.
Column C3. Note the data type conversion that happens. SQL Server supports the
uniqueidentifier type but R does not; therefore, the identifiers are represented as
strings.
Column C4. The column contains values generated by the R script and not present
in the original data.
Example 2: Dynamic column selection using R
The following example shows how you can use R code to check for invalid column types.
The script gets the schema of a specified table by using the SQL Server system views,
and removes any columns that have a specified invalid type.
See also
Data type mappings between Python and SQL Server
Python Tutorial: Deploy a linear
regression model with SQL machine
learning
Article • 03/03/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
In part four of this four-part tutorial series, you'll deploy a linear regression model
developed in Python into an Azure SQL Managed Instance database using Machine
Learning Services.
In part two, you learned how to load the data from a database into a Python data frame,
and prepare the data in Python.
In part three, you learned how to train a linear regression machine learning model in
Python.
Prerequisites
Part four of this tutorial assumes you have completed part one and its
prerequisites.
Run the following T-SQL statement in Azure Data Studio to create the stored procedure
to train the model.
SQL
-- Stored procedure that trains and generates a Python model using the rental_data table and a linear regression algorithm
DROP PROCEDURE IF EXISTS generate_rental_py_model;
go
CREATE PROCEDURE generate_rental_py_model (@trained_model varbinary(max) OUTPUT)
AS
BEGIN
EXECUTE sp_execute_external_script
  @language = N'Python'
, @script = N'
from sklearn.linear_model import LinearRegression
import pickle

df = rental_train_data

# Get all the columns from the dataframe.
columns = df.columns.tolist()

# Store the variable to predict.
target = "RentalCount"

# Initialize the model class and fit it to the training data.
lin_model = LinearRegression()
lin_model.fit(df[columns], df[target])

# Before saving the model to the database table, serialize it to a binary object.
trained_model = pickle.dumps(lin_model)'
, @input_data_1 = N'SELECT "RentalCount", "Year", "Month", "Day", "WeekDay", "Snow", "Holiday" FROM dbo.rental_data WHERE Year < 2015'
, @input_data_1_name = N'rental_train_data'
, @params = N'@trained_model varbinary(max) OUTPUT'
, @trained_model = @trained_model OUTPUT;
END;
GO
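Independent of SQL Server, the pickle round trip at the heart of this procedure can be sketched in plain Python. The tiny model class below is a stand-in for the tutorial's scikit-learn estimator, used only so the example runs anywhere:

```python
import pickle

class TinyLinearModel:
    """Stand-in for the tutorial's LinearRegression estimator."""
    def __init__(self, slope, intercept):
        self.slope = slope
        self.intercept = intercept

    def predict(self, xs):
        return [self.slope * x + self.intercept for x in xs]

# Train time: serialize the fitted model to bytes, as the stored procedure
# does before writing the varbinary(max) OUTPUT parameter.
model = TinyLinearModel(slope=2.0, intercept=1.0)
blob = pickle.dumps(model)

# Score time: load the bytes back and predict, as the scoring procedure
# does with pickle.loads on the stored binary object.
restored = pickle.loads(blob)
print(restored.predict([0, 1, 2]))  # [1.0, 3.0, 5.0]
```

The varbinary(max) column in the models table plays the role of `blob` here: it is just the pickled bytes of the trained estimator.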
1. Run the following T-SQL statement in Azure Data Studio to create a table called
dbo.rental_py_models, which is used to store the model.
SQL

USE TutorialDB;
DROP TABLE IF EXISTS dbo.rental_py_models;
GO
CREATE TABLE dbo.rental_py_models (
    model_name VARCHAR(30) NOT NULL DEFAULT('default model') PRIMARY KEY,
    model VARBINARY(MAX) NOT NULL
);
GO
2. Save the model to the table as a binary object, with the model name linear_model.
SQL

DECLARE @model VARBINARY(MAX);
EXECUTE generate_rental_py_model @model OUTPUT;
INSERT INTO rental_py_models (model_name, model) VALUES('linear_model', @model);

3. Create a stored procedure that predicts rental counts by using the trained model and a set of new data.

SQL

DROP PROCEDURE IF EXISTS py_predict_rentalcount;
GO
CREATE PROCEDURE py_predict_rentalcount (@model varchar(100))
AS
BEGIN
DECLARE @py_model varbinary(max) = (SELECT model FROM rental_py_models WHERE model_name = @model);
EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = N'
import pickle
import pandas
rental_model = pickle.loads(py_model)
df = rental_score_data
columns = df.columns.tolist()
target = "RentalCount"
lin_predictions = rental_model.predict(df[columns])
print(lin_predictions)
# Compute error between the test predictions and the actual values.
#print(lin_mse)
predictions_df = pandas.DataFrame(lin_predictions)
OutputDataSet = pandas.concat([predictions_df, df["RentalCount"], df["Month"], df["Day"], df["WeekDay"], df["Snow"], df["Holiday"], df["Year"]], axis=1)
'
, @input_data_1 = N'SELECT "RentalCount", "Year", "Month", "Day", "WeekDay", "Snow", "Holiday" FROM rental_data WHERE Year = 2015'
, @input_data_1_name = N'rental_score_data'
, @params = N'@py_model varbinary(max)'
, @py_model = @py_model
WITH RESULT SETS(("RentalCount_Predicted" float, "RentalCount" float, "Month" float, "Day" float, "WeekDay" float, "Snow" float, "Holiday" float, "Year" float));
END;
GO
SQL

DROP TABLE IF EXISTS dbo.py_rental_predictions;
GO
CREATE TABLE dbo.py_rental_predictions (
    [RentalCount_Predicted] int NULL,
    [RentalCount_Actual] int NULL,
    [Month] int NULL,
    [Day] int NULL,
    [WeekDay] int NULL,
    [Snow] int NULL,
    [Holiday] int NULL,
    [Year] int NULL
) ON [PRIMARY]
GO

SQL

--Insert the results of the predictions for test set into a table
INSERT INTO py_rental_predictions
EXECUTE py_predict_rentalcount 'linear_model';
You have successfully created, trained, and deployed a model. You then used that model
in a stored procedure to predict values based on new data.
Next steps
In part four of this tutorial series, you deployed a trained model into a database and
used it in stored procedures to predict values from new data.
To learn more about using Python with SQL machine learning, see:
Python tutorials
Modify R/Python code to run in SQL
Server (In-Database) instances
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
This article provides high-level guidance on how to modify R or Python code to run as a
SQL Server stored procedure to improve performance when accessing SQL data.
When you move R/Python code from a local IDE or other environment to SQL Server,
the code generally works without further modification. This is especially true for simple
code, such as a function that takes some inputs and returns a value. It's also easier to
port solutions that use the RevoScaleR/revoscalepy packages, which support execution
in different execution contexts with minimal changes. Note that MicrosoftML applies to
SQL Server 2016 (13.x), SQL Server 2017 (14.x), and SQL Server 2019 (15.x), and does not
appear in SQL Server 2022 (16.x).
However, your code might require substantial changes if any of the following apply:
You use libraries that access the network or that cannot be installed on SQL Server.
The code makes separate calls to data sources outside SQL Server, such as Excel
worksheets, files on shares, and other databases.
You want to parameterize the stored procedure and run the code in the @script
parameter of sp_execute_external_script.
Your original solution includes multiple steps that might be more efficient in a
production environment if executed independently, such as data preparation or
feature engineering vs. model training, scoring, or reporting.
You want to optimize performance by changing libraries, using parallel execution,
or offloading some processing to SQL Server.
Packages
Determine which packages are needed and ensure that they work on SQL Server.
Data sources
Primary data sources are large datasets, such as model training data, or input
data for predictions. Plan to map your largest dataset to the input parameter of
sp_execute_external_script.
Secondary data sources are typically smaller data sets, such as lists of factors, or
additional grouping variables.
Determine the outputs you need. If you run code using sp_execute_external_script,
the stored procedure can output only one data frame as a result. However, you can
also output multiple scalar outputs, including plots and models in binary format, as
well as other scalar values derived from code or SQL parameters.
Data types
For a detailed look at the data type mappings between R/Python and SQL Server, see
these articles:
Take a look at the data types used in your R/Python code and do the following:
All R/Python data types are supported by SQL Server Machine Learning Services.
However, SQL Server supports a greater variety of data types than does R or
Python. Therefore, some implicit data type conversions are performed when
moving SQL Server data to and from your code. You might need to explicitly cast
or convert some data.
NULL values are supported. However, R uses the NA construct to represent a
missing value, which is similar to a null.
Consider eliminating dependency on data that cannot be used by R: for example,
rowid and GUID data types from SQL Server cannot be consumed by R and will
generate errors.
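A small Python sketch of the point about missing values (the helper name is illustrative): a SQL NULL reaches the external script as a missing value, and converting explicitly keeps the behavior predictable downstream:

```python
import math

def to_float(value):
    """Convert a SQL value to float, mapping NULL (None) to NaN,
    which is how missing values surface in R (NA) and pandas (NaN)."""
    return math.nan if value is None else float(value)

# Simulated rows from a query, where the second value was NULL.
rows = [("1.5",), (None,), ("2.25",)]
converted = [to_float(v) for (v,) in rows]
print(converted)  # [1.5, nan, 2.25]
```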
Define your primary input data as a SQL query wherever possible to avoid data
movement.
When running code in a stored procedure, you can pass through multiple scalar
inputs. For any parameters that you want to use in the output, add the OUTPUT
keyword.
For example, the following scalar input @model_name contains the model name,
which is also later modified by the R script, and output in its own column in the
results:
SQL

DECLARE @local_model_name AS NVARCHAR(50) = N'my model';
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
# Modify the model name inside the R script.
model_name <- paste0(model_name, " (updated by R)")'
, @params = N'@model_name NVARCHAR(50) OUTPUT'
, @model_name = @local_model_name OUTPUT;
SELECT @local_model_name;
For example, assume your R script contains a formula like this one:

R

formula <- ArrDelay ~ CRSDepTime + DayOfWeek + CRSDepHour:DayOfWeek

An error is raised if the input dataset does not contain columns with the matching
names ArrDelay, CRSDepTime, DayOfWeek, and CRSDepHour.
In some cases, an output schema must be defined in advance for the results.
For example, to insert the data into a table, you must use the WITH RESULT SETS
clause to specify the schema.
The output schema is also required if the script uses the argument @parallel=1 .
The reason is that multiple processes might be created by SQL Server to run the
query in parallel, with the results collected at the end. Therefore, the output
schema must be prepared before the parallel processes can be created.
In other cases, you can omit the result schema by using the option WITH RESULT
SETS UNDEFINED. This statement returns the dataset from the script without
naming the columns or specifying the SQL data types.
Consider generating timing or tracking data using T-SQL rather than R/Python.
For example, you could pass the system time or other information used for
auditing and storage by adding a T-SQL call that's passed through to the results,
rather than generating similar data in the script.
If the input query can be parallelized, set @parallel=1 as part of your arguments to
sp_execute_external_script.
Parallel processing with this flag is typically possible any time that SQL Server can
work with partitioned tables or distribute a query among multiple processes and
aggregate the results at the end. Parallel processing with this flag is typically not
possible if you're training models using algorithms that require all data to be read,
or if you need to create aggregates.
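The partition-then-aggregate idea behind @parallel=1 can be sketched outside SQL Server (a rough analogy, not the engine's actual mechanism): each partition is scored independently and the partial results are combined at the end, which is exactly why per-row scoring parallelizes but whole-dataset training does not:

```python
from concurrent.futures import ThreadPoolExecutor

rows = list(range(100))
# Split the rows into four partitions, as a parallel plan might.
partitions = [rows[i::4] for i in range(4)]

def score(partition):
    # Per-row computation with no shared state: safe to run in parallel.
    return [x * 2 for x in partition]

with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = list(pool.map(score, partitions))

# Aggregate the partial results at the end.
combined = sorted(x for chunk in chunks for x in chunk)
print(combined[:5])  # [0, 2, 4, 6, 8]
```

Training that must see all rows at once has no equivalent of `score` on a single partition, so it cannot be split this way.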
Review your code to determine if there are steps that can be performed
independently, or performed more efficiently, by using a separate stored
procedure call. For example, you might get better performance by doing feature
engineering or feature extraction separately and saving the values to a table.
Look for ways to use T-SQL rather than R/Python code for set-based
computations.
User libraries are not supported, regardless of whether you're using a stored
procedure or running R/Python code in the SQL Server compute context.
If you have complex R code, use the R package sqlrutils to convert your code. This
package is designed to help experienced R users write good stored procedure
code.
You rewrite your R code as a single function with clearly defined inputs and
outputs, then use the sqlrutils package to generate the input and outputs in the
correct format. The sqlrutils package generates the complete stored procedure
code for you, and can also register the stored procedure in the database.
For more information and examples, see sqlrutils (SQL).
Users of SQL Server often cannot access files on the server, and SQL client tools
typically do not support the R/Python graphics devices. If you generate plots or
other graphics as part of the solution, consider exporting the plots as binary data
and saving them to a table, or writing the plot to a file.
Wrap prediction and scoring functions in stored procedures for direct access by
applications.
Next steps
To view examples of how R and Python solutions can be deployed in SQL Server, see
these tutorials:
R tutorials
Develop a predictive model in R with SQL machine learning
Python tutorials
Predict ski rental with linear regression with SQL machine learning
This article describes the steps for using the sqlrutils package to convert your R code to
run as a T-SQL stored procedure. For best possible results, your code might need to be
modified somewhat to ensure that all inputs can be parameterized.
All variables used by the function should be defined inside the function, or should be
defined as input parameters. See the sample code in this article.
Also, because the input parameters for the R function will become the input parameters
of the SQL stored procedure, you must ensure that your inputs and outputs conform to
the following type requirements:
Inputs
Among the input parameters, there can be at most one data frame.
The objects inside the data frame, as well as all other input parameters of the function,
must be of the following R data types:
POSIXct
numeric
character
integer
logical
raw
If an input type is not one of the above types, it needs to be serialized and passed into
the function as raw. In this case, the function must also include code to deserialize the
input.
Outputs
The function can output one of the following:
A data frame containing the supported data types. All objects in the data frame
must use one of the supported data types.
A named list, containing at most one data frame. All members of the list should
use one of the supported data types.
A NULL, if your function does not return any result
sqlrutils provides functions that define the input data schema and type, and define the
output data schema and type. It also includes functions that can convert R objects to the
required output type. You might make multiple function calls to create the required
objects, depending on the data types your code uses.
Inputs
If your function takes inputs, for each input, call the following functions:
When you make each function call, an R object is created that you will later pass as an
argument to StoredProcedure , to create the complete stored procedure.
Outputs
sqlrutils provides multiple functions for converting R objects such as lists to the
data.frame required by SQL Server.
If your function outputs a data frame directly, without first wrapping it into a list,
you can skip this step. You can also skip this step if your function returns NULL.
When converting a list or getting a particular item from a list, choose from these
functions:
Usage
To illustrate, assume that you want to create a stored procedure named sp_rsample with
these parameters:
Uses an existing function foosql. The function was based on existing code in R
function foo, but you rewrote the function to conform to the requirements as
described in this section, and named the updated function as foosql.
Uses the data frame queryinput as input
Generates as output a data frame with the R variable name, sqloutput
You want to create the T-SQL code as a file in the C:\Temp folder, so that you can
run it using SQL Server Management Studio later
Note
Because you are writing the file to the file system, you can omit the arguments that
define the database connection.
The output of the function is a T-SQL stored procedure that can be executed on an
instance of SQL Server 2016 (requires R Services) or SQL Server 2017 (requires Machine
Learning Services with R).
For additional examples, see the package help, by calling help(StoredProcedure) from
an R environment.
Step 4. Register and Run the Stored Procedure
There are two ways that you can run the stored procedure:
Using T-SQL, from any client that supports connections to the SQL Server 2016 or
SQL Server 2017 instance
From an R environment
Both methods require that the stored procedure be registered in the database where
you intend to use the stored procedure.
Using T-SQL. If you are more comfortable with T-SQL, open SQL Server
Management Studio (or any other client that can run SQL DDL commands) and
execute the CREATE PROCEDURE statement using the code prepared by the
StoredProcedure function.
Using R. While you are still in your R environment, you can use the
registerStoredProcedure function in sqlrutils to register the stored procedure with
the database.
For example, you could register the stored procedure sp_rsample in the instance
and database defined in sqlConnStr, by making this R call:
registerStoredProcedure(sp_rsample, sqlConnStr)
Important
Regardless of whether you use R or SQL, you must run the statement using an
account that has permissions to create new database objects.
Run using R
Some additional preparation is needed if you want to execute the stored procedure
from R code, rather than from SQL Server. For example, if the stored procedure requires
input values, you must set those input parameters before the function can be executed,
and then pass those objects to the stored procedure in your R code.
The overall process of calling the prepared SQL stored procedure is as follows:
Example
This example shows the before and after versions of an R script that gets data from a
SQL Server database, performs some transformations on the data, and saves it to a
different database.
This simple example is used only to demonstrate how you might rearrange your R code
to make it easier to convert to a stored procedure.
return(data)
rxOpen(dsSqlFrom)
rxOpen(dsSqlTo)
if (rxSqlServerTableExists("cleanData")) {rxSqlServerDropTable("cleanData")}
rxDataStep(inData = dsSqlFrom,
outFile = dsSqlTo,
transformFunc = xFunc,
transformVars = xVars,
overwrite = TRUE)
Note
When you use an ODBC connection rather than invoking the RxSqlServerData
function, you must open the connection using rxOpen before you can perform
operations on the database.
return(data)}
rxDataStep(inData = dsSqlFrom,
outFile = dsSqlTo,
transformFunc = xFunc,
transformVars = xVars,
overwrite = TRUE)
return(NULL)
Note
Although you do not need to open the ODBC connection explicitly as part of your
code, an ODBC connection is still required to use sqlrutils.
See also
sqlrutils reference
Native scoring using the PREDICT T-SQL
function with SQL machine learning
Article • 03/03/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Database
Azure SQL
Managed Instance
Azure Synapse Analytics
Learn how to use native scoring with the PREDICT T-SQL function to generate prediction
values for new data inputs in near-real-time. Native scoring requires that you have an
already-trained model.
The PREDICT function uses the native C++ extension capabilities in SQL machine
learning. This methodology offers the fastest possible processing speed for forecasting
and prediction workloads, and supports models in Open Neural Network Exchange
(ONNX) format or models trained by using the RevoScaleR and revoscalepy packages.
To use native scoring, call the PREDICT T-SQL function and pass the following required
inputs:
A compatible model, trained in advance by using a supported algorithm.
Input data, typically defined as a SQL query.
The function returns predictions for the input data, together with any columns of source
data that you want to pass through.
Prerequisites
PREDICT is available on:
All editions of SQL Server 2017 and later on Windows and Linux
Azure SQL Managed Instance
Azure SQL Database
Azure SQL Edge
Azure Synapse Analytics
The function is enabled by default. You do not need to install R or Python, or enable
additional features.
Supported models
The model formats supported by the PREDICT function depend on the SQL platform on
which you perform native scoring. The following sections describe which model formats
are supported on which platform.
ONNX models
The model must be in an Open Neural Network Exchange (ONNX) model format.
RevoScale models
The model must be trained in advance using one of the supported rx algorithms listed
below using the RevoScaleR or revoscalepy package.
Serialize the model using rxSerialize for R, and rx_serialize_model for Python. These
serialization functions have been optimized to support fast scoring.
revoscalepy algorithms
rx_lin_mod
rx_logit
rx_btrees
rx_dtree
rx_dforest
RevoScaleR algorithms
rxLinMod
rxLogit
rxBTrees
rxDtree
rxDForest
Examples
SQL

DECLARE @model varbinary(max) = (
    SELECT DATA
    FROM dbo.models
    WHERE id = 1
);
WITH predict_input
AS (
    SELECT id
    , CRIM
    , ZN
    , INDUS
    , CHAS
    , NOX
    , RM
    , AGE
    , DIS
    , RAD
    , TAX
    , PTRATIO
    , B
    , LSTAT
    FROM [dbo].[features]
)
SELECT predict_input.id
    , p.variable1 AS MEDV
FROM PREDICT(MODEL = @model, DATA = predict_input)
WITH (variable1 float) AS p;
Note
Because the columns and values returned by PREDICT can vary by model type, you
must define the schema of the returned data by using a WITH clause.
Use the following statements to create a database and a table to store the iris dataset.

SQL

CREATE DATABASE NativeScoringTest;
GO
USE NativeScoringTest;
GO
DROP TABLE IF EXISTS iris_data;
GO
CREATE TABLE iris_data (
    id INT NOT NULL IDENTITY PRIMARY KEY
    , "Sepal.Length" FLOAT NOT NULL
    , "Sepal.Width" FLOAT NOT NULL
    , "Petal.Length" FLOAT NOT NULL
    , "Petal.Width" FLOAT NOT NULL
    , "Species" VARCHAR(100) NOT NULL
    , "SpeciesId" INT NOT NULL
);
GO
Use the following statement to populate the data table with data from the iris dataset.
SQL

INSERT INTO iris_data ("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width", "Species", "SpeciesId")
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'iris_data <- iris;
iris_data$SpeciesId <- c(unclass(iris_data$Species));'
, @input_data_1 = N''
, @output_data_1_name = N'iris_data';
GO
Use the following statement to create a table for storing trained models.

SQL

DROP TABLE IF EXISTS ml_models;
GO
CREATE TABLE ml_models (
    model_name nvarchar(100) NOT NULL PRIMARY KEY
    , model_version nvarchar(100) NOT NULL
    , native_model_object varbinary(max) NOT NULL
);
GO
The following code creates a model based on the iris dataset and saves it to the table
named models.
SQL

DECLARE @model varbinary(max);
EXECUTE sp_execute_external_script
  @language = N'R'
, @script = N'
iris.dtree <- rxDTree(SpeciesId ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data = iris_data);
model <- rxSerializeModel(iris.dtree, realtimeScoringOnly = TRUE);
'
, @input_data_1 = N'SELECT * FROM iris_data'
, @params = N'@model varbinary(max) OUTPUT'
, @model = @model OUTPUT;
INSERT INTO ml_models (model_name, model_version, native_model_object)
VALUES('iris.dtree', 'v1', @model);
Note
Be sure to use the rxSerializeModel function from RevoScaleR to save the model.
The standard R serialize function cannot generate the required format.
You can run a statement such as the following to view the stored model in binary
format:
SQL

SELECT *
FROM ml_models;
The following simple PREDICT statement gets a classification from the decision tree
model using the native scoring function. It predicts the iris species based on attributes
you provide, petal length and width.
SQL

DECLARE @model varbinary(max) = (
    SELECT native_model_object
    FROM ml_models
    WHERE model_name = 'iris.dtree'
    AND model_version = 'v1'
);
SELECT d.*, p.*
FROM PREDICT(MODEL = @model, DATA = dbo.iris_data AS d)
WITH ("setosa_Pred" float, "versicolor_Pred" float, "virginica_Pred" float) AS p;
go
If you get the error, "Error occurred during execution of the function PREDICT. Model is
corrupt or invalid", it usually means that your query didn't return a model. Check
whether you typed the model name correctly, or if the models table is empty.
Note
Because the columns and values returned by PREDICT can vary by model type, you
must define the schema of the returned data by using a WITH clause.
Next steps
PREDICT T-SQL function
SQL machine learning documentation
Machine learning and AI with ONNX in SQL Edge
Deploy and make predictions with an ONNX model in Azure SQL Edge
Score machine learning models with PREDICT in Azure Synapse Analytics
Get Python package information
Article • 02/28/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
This article describes how to get information about installed Python packages, including
versions and installation locations, on Azure SQL Managed Instance Machine Learning
Services. Example Python scripts show you how to list package information such as
installation path and version.
All script or code that runs in-database on SQL Server must load functions from the
instance library. SQL Server can't access packages installed to other libraries. This applies
to remote clients as well: any Python code running in the server compute context can
only use packages installed in the instance library.
To protect server assets, the default
instance library can be modified only by a computer administrator.
Run the following SQL statement if you want to verify the default library for the current
instance. This example returns the list of folders included in the Python sys.path
variable. The list includes the current directory and the standard library path.
SQL
EXECUTE sp_execute_external_script
@language =N'Python',
@script=N'import sys; print("\n".join(sys.path))'
For more information about the variable sys.path and how it's used to set the
interpreter's search path for modules, see The Module Search Path .
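You can run the same inspection in any local Python interpreter; this sketch just mirrors the in-database script above:

```python
import sys

# The interpreter resolves imports by scanning sys.path in order, so a
# package is loadable only if its install folder appears somewhere here.
for entry in sys.path:
    print(entry or "<current directory>")
```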
Note
Don't try to install Python packages directly in the SQL package library using pip or
similar methods. Instead, use sqlmlutils to install packages in a SQL instance. For
more information, see Install Python packages with sqlmlutils.
revoscalepy 9.4.7: Used for remote compute contexts, streaming, parallel execution of
rx functions for data import and transformation, modeling, visualization, and analysis.
For information on which version of Python is included, see Python and R versions.
Component upgrades
By default, Python packages are refreshed through service packs and cumulative
updates. Additional packages and full version upgrades of core Python components are
possible only through product upgrades.
You should never manually overwrite the version of Python installed by SQL Server
Setup with newer versions on the web. Microsoft Python packages are based on
specific versions of Anaconda. Modifying your installation could destabilize it.
The following example lists all Python packages installed in the instance library, with
their versions.

SQL

EXECUTE sp_execute_external_script
@language = N'Python',
@script = N'
import pkg_resources
import pandas
OutputDataSet = pandas.DataFrame(sorted([(i.key, i.version) for i in pkg_resources.working_set]))
'
WITH RESULT SETS(("Package" nvarchar(128), "Version" nvarchar(128)));
For example, the following code looks for the scikit-learn package. If the package is
found, the code prints the package version.

SQL

EXECUTE sp_execute_external_script
@language = N'Python',
@script = N'
import pkg_resources
pkg_name = "scikit-learn"
try:
    version = pkg_resources.get_distribution(pkg_name).version
    print("Package " + pkg_name + " is version " + version)
except:
    print("Package " + pkg_name + " not found")
'
Result:
text
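For local checks, the standard-library importlib.metadata module offers the same lookup without pkg_resources (a sketch; pkg_resources is deprecated in recent setuptools releases, and this is a client-side alternative, not the in-database pattern shown above):

```python
from importlib import metadata

def get_version(pkg_name):
    """Return the installed version of pkg_name, or None if it's absent."""
    try:
        return metadata.version(pkg_name)
    except metadata.PackageNotFoundError:
        return None

print(get_version("a-package-that-is-not-installed"))  # None
```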
SQL
EXECUTE sp_execute_external_script
@language = N'Python',
@script = N'
import sys
print(sys.version)
'
Next steps
Install new Python packages with sqlmlutils
Install Python packages with sqlmlutils
Article • 02/28/2023
Applies to:
SQL Server 2019 (15.x)
Azure SQL Managed Instance
This article describes how to use functions in the sqlmlutils package to install new
Python packages to an instance of Azure SQL Managed Instance Machine Learning
Services. The packages you install can be used in Python scripts running in-database
using the sp_execute_external_script T-SQL statement.
Note
You cannot update or uninstall packages that have been preinstalled on an instance
of SQL Managed Instance Machine Learning Services. To view a list of packages
currently installed, see List all installed Python packages.
For more information about package location and installation paths, see Get Python
package information.
Prerequisites
Install Azure Data Studio on the client computer you use to connect to SQL Server.
You can use other database management or query tools, but this article assumes
Azure Data Studio.
Install the Python kernel in Azure Data Studio. You can also install and use Python
from the command line, and you can use an alternative Python development
environment such as Visual Studio Code with the Python Extension .
The version of Python on the client computer must match the version of Python on
the server, and packages you install must be compliant with the version of Python
you have.
For information on which version of Python is included with each SQL
Server version, see Python and R versions.
To verify the version of Python on a particular SQL Server instance, use the
following T-SQL command.
SQL
EXECUTE sp_execute_external_script
@language = N'Python',
@script = N'
import sys
print(sys.version)
'
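A quick client-side sketch of that compatibility check (the helper is illustrative): compare the client's major.minor version against the values reported by the server before attempting an install:

```python
import sys

def versions_compatible(server_major, server_minor):
    """Packages built and installed from this client are only expected to
    load on the server when the major.minor Python versions agree."""
    client = sys.version_info
    return (client.major, client.minor) == (server_major, server_minor)

# Comparing the client against itself always succeeds.
print(versions_compatible(sys.version_info.major, sys.version_info.minor))  # True
```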
Other considerations
The Python package library is located in the Program Files folder of your SQL
Server instance and, by default, installing in this folder requires administrator
permissions. For more information, see Package library location.
Package installation is specific to the SQL instance, database, and user you specify
in the connection information you provide to sqlmlutils. To use the package in
multiple SQL instances or databases, or for different users, you'll need to install the
package for each one. The exception is that if the package is installed by a member
of dbo , the package is public and is shared with all users. If a user installs a newer
version of a public package, the public package is not affected but that user will
have access to the newer version.
Before adding a package, consider whether the package is a good fit for the SQL
Server environment.
We recommend that you use Python in-database for tasks that benefit from
tight integration with the database engine, such as machine learning, rather
than tasks that simply query the database.
If you add packages that put too much computational pressure on the server,
performance will suffer.
On a hardened SQL Server environment, you might want to avoid the following:
Packages that require network access
Packages that require elevated file system access
Packages used for web development or other tasks that don't benefit by
running inside SQL Server
The Python package tensorflow cannot be installed using sqlmlutils. For more
information and a workaround, see Known issues in SQL Server Machine
Learning Services.
Console
1. Make sure you have pip installed. See pip installation for more information.
2. Download the latest sqlmlutils zip file from
https://github.com/microsoft/sqlmlutils/tree/master/Python/dist to the client
computer. Don't unzip the file.
3. Open a Command Prompt and run the following commands to install the
sqlmlutils package. Substitute the full path to the sqlmlutils zip file you
downloaded - this example assumes the downloaded file is c:\temp\sqlmlutils-
1.0.0.zip .
Console
2. Use the following commands to install the text-tools package. Substitute your own
SQL Server database connection information.
Python

import sqlmlutils
connection = sqlmlutils.ConnectionInfo(server="yourserver", database="yourdatabase", uid="username", pwd="password")
sqlmlutils.SQLPackageManager(connection).install("text-tools")
1. Open a Command Prompt and run the following command to create a local folder
that contains the text-tools package. This example creates the folder
c:\temp\text-tools .
Console

pip download text-tools -d c:\temp\text-tools
2. Copy the text-tools folder to the client computer. The following example
assumes you copied it to c:\temp\packages\text-tools .
In this example, text-tools has no dependencies, so there is only one file from the text-
tools folder for you to install. In contrast, a package such as scikit-plot has 11
dependencies, so you would find 12 files in the folder (the scikit-plot package and the
11 dependent packages), and you would install each of them.
Run the following Python script. Substitute the actual file path and name of the package,
and your own SQL Server database connection information. Repeat the
sqlmlutils.SQLPackageManager statement for each package file in the folder.
Python

import sqlmlutils
connection = sqlmlutils.ConnectionInfo(server="yourserver", database="yourdatabase", uid="username", pwd="password")
sqlmlutils.SQLPackageManager(connection).install("c:\\temp\\packages\\text-tools\\text_tools-1.0.0-py3-none-any.whl")
Use the installed package in a Python script run with sp_execute_external_script:

SQL

EXECUTE sp_execute_external_script
@language = N'Python',
@script = N'
from text_tools.finders import find_first
my_string = "Lorem ipsum dolor sit amet"
query = "Ipsum"
first_match = find_first(my_string, query)
print(first_match)
'
Python
sqlmlutils.SQLPackageManager(connection).uninstall("text-tools")
For information about any sqlmlutils function, use the Python help function. For
example:
Python

import sqlmlutils
help(sqlmlutils.SQLPackageManager.install)
Next steps
For information about Python packages installed in SQL Server Machine Learning
Services, see Get Python package information.
Get R package information
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
This article describes how to get information about installed R packages on Azure SQL
Managed Instance Machine Learning Services. Example R scripts show you how to list
package information such as installation path and version.
Note
Feature capabilities and installation options vary between versions of SQL Server.
Use the version selector dropdown to choose the appropriate version of SQL
Server.
All script that runs in-database on SQL Server must load functions from the instance
library. SQL Server can't access packages installed to other libraries. This applies to
remote clients as well: any R script running in the server compute context can only use
packages installed in the instance library.
To protect server assets, the default instance
library can be modified only by a computer administrator.
Run the following statement to verify the default R package library for the current
instance:
SQL
EXECUTE sp_execute_external_script
@language = N'R',
@script = N'OutputDataSet <- data.frame(.libPaths());'
WITH RESULT SETS (([DefaultLibraryName] VARCHAR(MAX) NOT NULL));
GO
RevoScaleR 9.4.7: Used for remote compute contexts, streaming, parallel execution of
rx functions for data import and transformation, modeling, visualization, and analysis.
Component upgrades
By default, R packages are refreshed through service packs and cumulative updates.
Additional packages and full version upgrades of core R components are possible only
through product upgrades.
For information on which version of R is included with each SQL Server version, see
Python and R versions.
Important
You should never manually overwrite the version of R installed by SQL Server Setup
with newer versions on the web. Microsoft R packages are based on specific
versions of R. Modifying your installation could destabilize it.
List all installed R packages
The following example uses the R function installed.packages() in a Transact-SQL
stored procedure to display a list of R packages that have been installed in the
R_SERVICES library for the current SQL instance. This script returns package name and
version fields in the DESCRIPTION file.
SQL
EXECUTE sp_execute_external_script
@language = N'R',
@script = N'
str(OutputDataSet);
packagematrix <- installed.packages();
Name <- packagematrix[,1];
Version <- packagematrix[,3];
OutputDataSet <- data.frame(Name, Version);'
, @input_data_1 = N''
WITH RESULT SETS ((PackageName nvarchar(250), PackageVersion nvarchar(max)));
For more information about the optional and default fields for the R package
DESCRIPTION file, see https://cran.r-project.org.
For example, the following statement looks for and loads the glue package, if available.
If the package cannot be located or loaded, you get an error.
SQL
EXECUTE sp_execute_external_script
@language =N'R',
@script=N'
require("glue")
'
SQL
EXECUTE sp_execute_external_script
@language = N'R',
@script = N'
print(packageDescription("MicrosoftML"))
'
Next steps
Install new R packages with sqlmlutils
Install R packages with sqlmlutils
Article • 03/03/2023
Applies to:
SQL Server 2019 (15.x)
Azure SQL Managed Instance
This article describes how to use functions in the sqlmlutils package to install R
packages to an instance of Azure SQL Managed Instance Machine Learning Services. The
packages you install can be used in R scripts running in-database using the
sp_execute_external_script T-SQL statement.
7 Note
You cannot update or uninstall packages that have been preinstalled on an instance
of SQL Managed Instance Machine Learning Services. To view a list of packages
currently installed, see List all installed R packages.
Prerequisites
Install R and RStudio Desktop on the client computer you use to connect to
SQL Server. You can use any R IDE for running scripts, but this article assumes
RStudio.
The version of R on the client computer must match the version of R on the server,
and packages you install must be compliant with the version of R you have.
For
information on which version of R is included with each SQL Server version, see
Python and R versions.
To verify the version of R on a particular SQL Server, use the following T-SQL
command.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
  , @script = N'print(R.version)';
Install Azure Data Studio on the client computer you use to connect to SQL Server.
You can use other database management or query tools, but this article assumes
Azure Data Studio.
Other considerations
Package installation is specific to the SQL instance, database, and user you specify
in the connection information you provide to sqlmlutils. To use the package in
multiple SQL instances or databases, or for different users, you'll need to install the
package for each one. The exception is that if the package is installed by a member
of dbo , the package is public and is shared with all users. If a user installs a newer
version of a public package, the public package is not affected but that user will
have access to the newer version.
R script running in SQL Server can use only packages installed in the default
instance library. SQL Server cannot load packages from external libraries, even if
that library is on the same computer. This includes R libraries installed with other
Microsoft products.
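As a quick check, a statement along the following lines (a sketch using sp_execute_external_script) prints the library paths that the instance actually searches; the first path returned is typically the instance's default library:

```sql
-- Sketch: list the library search paths visible to R inside the instance.
EXECUTE sp_execute_external_script
  @language = N'R',
  @script = N'OutputDataSet <- data.frame(LibraryPath = .libPaths());'
WITH RESULT SETS ((LibraryPath NVARCHAR(MAX)));
```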
On a hardened SQL Server environment, you might want to avoid the following:
Packages that require network access
Packages that require elevated file system access
Packages used for web development or other tasks that don't benefit from
running inside SQL Server
The sqlmlutils package depends on the odbc package, and odbc depends on a number
of other packages. The following procedures install all of these packages in the correct
order.
1. Download the latest sqlmlutils file ( .zip for Windows, .tar.gz for Linux) from
https://github.com/microsoft/sqlmlutils/releases to the client computer. Don't
expand the file.
2. Open a Command Prompt and run the following commands to install the
packages odbc and sqlmlutils. Substitute the path to the sqlmlutils file you
downloaded. The odbc package is found online and installed.
Console
R.exe -e "install.packages('odbc', type='binary')"
R.exe CMD INSTALL <path-to-sqlmlutils-file>
The odbc package has a number of dependent packages, and identifying all
dependencies for a package gets complicated. We recommend that you use
miniCRAN to create a local repository folder for the package that includes all the
dependent packages.
For more information, see Create a local R package repository
using miniCRAN.
The sqlmlutils package consists of a single file that you can copy to the client computer
and install.
2. In RStudio, run the following R script to create a local repository of the package
odbc. This example assumes the repository will be created in the folder odbc .
library("miniCRAN")
CRAN_mirror <- c(CRAN = "https://cran.microsoft.com")
local_repo <- "odbc"
pkgs_needed <- "odbc"
makeRepo(pkgDep(pkgs_needed), path = local_repo, repos = CRAN_mirror, type = "win.binary", Rversion = "<R-version-on-server>")
For the Rversion value, use the version of R installed on SQL Server. To verify the
installed version, use the following T-SQL command.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
  , @script = N'print(R.version)';
3. Download the latest sqlmlutils file ( .zip for Windows, .tar.gz for Linux) from
https://github.com/microsoft/sqlmlutils/releases . Don't expand the file.
4. Copy the entire odbc repository folder and the sqlmlutils file to the client
computer.
2. Run the following commands to install odbc and then sqlmlutils. Substitute the full
paths to the odbc repository folder and the sqlmlutils file you copied to this
computer.
Console
R.exe -e "install.packages('odbc', repos='file:///<path-to-odbc-repo>', type='binary')"
R.exe CMD INSTALL <path-to-sqlmlutils-file>
1. On the client computer, open RStudio and create a new R Script file.
2. Use the following R script to install the glue package using sqlmlutils. Substitute
your own SQL Server database connection information.
library(sqlmlutils)
connection <- connectionInfo(
  server = "server",
  database = "database",
  uid = "username",
  pwd = "password")
sql_install.packages(connectionString = connection, pkgs = "glue", verbose = TRUE, scope = "PUBLIC")
Tip
The scope can be either PUBLIC or PRIVATE. Public scope is useful for the
database administrator to install packages that all users can use. Private scope
makes the package available only to the user who installs it. If you don't
specify the scope, the default scope is PRIVATE.
1. Run the following R script to create a local repository for glue. This example
creates the repository folder in c:\downloads\glue .
library("miniCRAN")
CRAN_mirror <- c(CRAN = "https://cran.microsoft.com")
local_repo <- "c:/downloads/glue"
pkgs_needed <- "glue"
makeRepo(pkgDep(pkgs_needed), path = local_repo, repos = CRAN_mirror, type = "win.binary", Rversion = "<R-version-on-server>")
For the Rversion value, use the version of R installed on SQL Server. To verify the
installed version, use the following T-SQL command.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
  , @script = N'print(R.version)';
2. Copy the entire glue repository folder ( c:\downloads\glue ) to the client computer.
For example, copy it to the folder c:\temp\packages\glue .
2. Use the following R script to install the glue package using sqlmlutils. Substitute
your own SQL Server database connection information (if you don't use Windows
Authentication, add uid and pwd parameters).
library(sqlmlutils)
connection <- connectionInfo(
  server = "yourserver",
  database = "yourdatabase")
localRepo = "c:/temp/packages/glue"
sql_install.packages(connectionString = connection, pkgs = "glue", verbose = TRUE, scope = "PUBLIC", repos = paste0("file:///", localRepo))
Tip
The scope can be either PUBLIC or PRIVATE. Public scope is useful for the
database administrator to install packages that all users can use. Private scope
makes the package available only to the user who installs it. If you don't
specify the scope, the default scope is PRIVATE.
1. Open Azure Data Studio and connect to your SQL Server database.
2. Run the following T-SQL script to verify that the glue package is installed and
working. The value assigned to name is only an example.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
  , @script = N'
library(glue)
name <- "World"
text <- glue(''Hello {name}!'')
print(text)
';
Results
text
For information about any sqlmlutils function, use the R help function or ? operator. For
example:
library(sqlmlutils)
help("sql_install.packages")
Next steps
For information about installed R packages, see Get R package information
For help in working with R packages, see Tips for using R packages
For information about installing Python packages, see Install Python packages with
pip
For more information about SQL Server Machine Learning Services, see What is
SQL Server Machine Learning Services (Python and R)?
Create a local R package repository
using miniCRAN
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
This article describes how to install R packages offline by using miniCRAN to create a
local repository of packages and dependencies. miniCRAN identifies and downloads
packages and dependencies into a single folder that you copy to other computers for
offline R package installation.
You can specify one or more packages, and miniCRAN recursively reads the dependency
tree for these packages. It then downloads only the listed packages and their
dependencies from CRAN or similar repositories.
When it's done, miniCRAN creates an internally consistent repository consisting of the
selected packages and all required dependencies. You can move this local repository to
the server, and proceed to install the packages without an internet connection.
Experienced R users often look for the list of dependent packages in the DESCRIPTION
file of a downloaded package. However, packages listed in Imports might have second-
level dependencies. For this reason, we recommend miniCRAN for assembling the full
collection of required packages.
Easier offline installation: Installing a package on an offline server requires that you
also download all package dependencies. Using miniCRAN makes it easier to get
all dependencies in the correct format and avoid dependency errors.
Install miniCRAN
The miniCRAN package itself is dependent on 18 other CRAN packages, among which is
the RCurl package, which has a system dependency on the curl-devel package. Similarly,
package XML has a dependency on libxml2-devel. To resolve dependencies, we
recommend that you build your local repository initially on a machine with full internet
access.
Run the following commands on a computer with a base R, R tools, and internet
connection. It's assumed that this is not your SQL Server computer. The following
commands install the miniCRAN package and the igraph package. This example checks
whether the package is already installed, but you can bypass the if statements and
install the packages directly.
if(!require("miniCRAN")) install.packages("miniCRAN")
if(!require("igraph")) install.packages("igraph")
library("miniCRAN")
1. Create a list of the packages you want to install, for example:
pkgs_needed <- c("glue")
Do not add dependencies to this initial list. The igraph package used by miniCRAN
generates the list of dependencies automatically. For more information about how to
use the generated dependency graph, see Using miniCRAN to identify package
dependencies.
2. Optionally, plot the dependency graph. This is not necessary, but it can be
informative.
plot(makeDepGraph(pkgs_needed))
3. Create the local repo. Be sure to change the R version, if necessary, to the version
installed on your SQL Server instance. If you did a component upgrade, your
version might be newer than the original version. For more information, see Get R
package information.
local_repo <- "miniCRAN"
CRAN_mirror <- c(CRAN = "https://cran.microsoft.com")
makeRepo(pkgDep(pkgs_needed), path = local_repo, repos = CRAN_mirror, type = "win.binary", Rversion = "<R-version-on-server>")
From this information, the miniCRAN package creates the folder structure that you
need to copy the packages to the SQL Server later.
At this point you should have a folder containing the packages you need and any
additional packages that are required. The folder should contain a collection of zipped
packages. Do not unzip the packages or rename any files.
Optionally, run the following code to list the packages contained in the local miniCRAN
repository.
pdb <- pkgAvail(repos = local_repo, type = "win.binary")
head(pdb)
pdb$Package
7 Note
The recommended method for installing packages is using sqlmlutils. See Install
new R packages with sqlmlutils.
1. Copy the folder containing the miniCRAN repository, in its entirety, to the server
where you plan to install the packages. The folder typically has this structure:
2. Open an R tool associated with the instance (for example, you could use Rgui.exe).
Right-click and select Run as administrator to allow the tool to make updates to
your system.
3. Get the path for the instance library, and add it to the list of library paths.
4. Specify the new location on the server where you copied the miniCRAN repository
as server_repo .
In this example, we assume that you copied the repository to a temporary folder
on the server.
5. Since you're working in a new R workspace on the server, you must also furnish the
list of packages to install.
6. Install the packages, providing the path to the local copy of the miniCRAN repo.
7. From the instance library, you can view the installed packages using a command
like the following:
installed.packages()
See also
Get R package information
R tutorials
Tips for using R packages
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
This article provides helpful tips on using R packages in SQL Server. These tips are for
DBAs who are unfamiliar with R, and experienced R developers who are unfamiliar with
package access in a SQL Server instance.
If you're new to R
As an administrator installing R packages for the first time, knowing a few basics about
R package management can help you get started.
Package dependencies
R packages frequently depend on multiple other packages, some of which might not be
available in the default R library used by the instance. Sometimes a package requires a
different version of a dependent package than what's already installed. Package
dependencies are noted in a DESCRIPTION file embedded in the package, but are
sometimes incomplete. You can use the igraph package to fully articulate the
dependency graph.
If you need to install multiple packages, or want to ensure that everyone in your
organization gets the correct package type and version, we recommend that you use
the miniCRAN package to analyze the complete dependency chain. miniCRAN creates
a local repository that can be shared among multiple users or computers. For more
information, see Create a local package repository using miniCRAN.
This path should point to the R_SERVICES folder for the instance. For more information,
including how to determine which packages are already installed, see Get R package
information.
In a local R session, a developer can load a package from a user library by specifying its location:
library("packagename", lib.loc = "c:/Users/<username>/R/win-library")
This does not work when running R solutions in SQL Server, because R packages must
be installed to a specific default library that is associated with the instance. When a
package is not available in the default library, you get an error when you try to call
the package.
For information on how to install R packages in SQL Server, see Install new R packages
on SQL Server Machine Learning Services or SQL Server R Services.
Also, if a package is installed in the default library, the R runtime loads the package
from the default library, even if you specify a different version in the R code.
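For example, a quick way to see which version of a package the R runtime actually resolves from the instance library (glue is used here purely as an illustration) is:

```sql
-- Prints the version of the package that the R runtime loads
-- from the instance's default library ("glue" is just an example name).
EXECUTE sp_execute_external_script
  @language = N'R',
  @script = N'print(packageVersion("glue"))';
```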
Check your code to make sure that there are no calls to uninstalled packages.
Know which package library is associated with the instance. For more information,
see Get R package information.
See also
Install new R packages with sqlmlutils
Monitor Python and R script execution
using custom reports in SQL Server
Management Studio
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
Use custom reports in SQL Server Management Studio (SSMS) to monitor the execution
of external scripts (Python and R), view the resources used, diagnose problems, and
tune performance in SQL Server Machine Learning Services.
This article explains how to install and use the custom reports provided for SQL Server
Machine Learning Services.
For more information on reports in SQL Server Management Studio, see Custom reports
in Management Studio.
1. Download the SSMS Custom Reports for SQL Server Machine Learning Services
from GitHub.
7 Note
a. Locate the custom reports folder used by SQL Server Management Studio. By
default, custom reports are stored in this folder (where user_name is your
Windows user name):
b. Copy the *.RDL files you downloaded to the custom reports folder.
a. In Management Studio, right-click the Databases node for the instance where
you want to run the reports.
c. In the Open File dialog box, locate the custom reports folder.
d. Select one of the RDL files you downloaded, and then click Open.
Reports
The SSMS Custom Reports repository in GitHub includes the following reports:
Report Description
Active Sessions: Users who are currently connected to the SQL Server instance and running a Python or R script.
Configuration: Installation settings of Machine Learning Services and properties of the Python or R runtime.
Execution Statistics: Execution statistics of Machine Learning Services. For example, you can get the total number of external script executions and the number of parallel executions.
Extended Events: Extended events that are available to get more insights into external script execution.
Packages: Lists the R or Python packages installed on the SQL Server instance and their properties, such as version and name.
Resource Usage: View the CPU, memory, and IO consumption of SQL Server and external script execution. You can also view the memory setting for external resource pools.
Next steps
Monitor SQL Server Machine Learning Services using dynamic management views
(DMVs)
Monitor Python and R scripts with extended events in SQL Server Machine
Learning Services
Monitor SQL Server Machine Learning
Services using dynamic management
views (DMVs)
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
Use dynamic management views (DMVs) to monitor the execution of external scripts
(Python and R), view the resources used, diagnose problems, and tune performance in
SQL Server Machine Learning Services.
In this article, you will find the DMVs that are specific to SQL Server Machine Learning
Services, along with example queries that use them.
For more general information about DMVs, see System Dynamic Management Views.
Tip
You can also use the custom reports to monitor SQL Server Machine Learning
Services. For more information, see Monitor machine learning using custom
reports in Management Studio.
Run the query below to get this output. For more information on the views and
functions used, see sys.dm_server_registry, sys.configurations, and SERVERPROPERTY.
SQL
SELECT CAST(SERVERPROPERTY('IsAdvancedAnalyticsInstalled') AS INT) AS IsMLServicesInstalled
    , CAST(value_in_use AS INT) AS ExternalScriptsEnabled
    , COALESCE(SIGN(SUSER_ID(CONCAT (
        CAST(SERVERPROPERTY('MachineName') AS NVARCHAR(128))
        , '\SQLRUserGroup'
        , CAST(SERVERPROPERTY('InstanceName') AS NVARCHAR(128))
        ))), 0) AS ImpliedAuthenticationEnabled
    , COALESCE((
        SELECT CAST(r.value_data AS INT)
        FROM sys.dm_server_registry AS r
        WHERE r.registry_key LIKE N'%SuperSocketNetLib\Tcp'
            AND r.value_name = N'Enabled'
        ), - 1) AS IsTcpEnabled
FROM sys.configurations
WHERE name = 'external scripts enabled';
Column Description
Active sessions
View the active sessions running external scripts.
Run the query below to get this output. For more information on the dynamic
management views used, see sys.dm_exec_requests, sys.dm_external_script_requests,
and sys.dm_exec_sessions.
SQL
SELECT r.session_id, r.blocking_session_id, s.login_name
    , r.wait_time, r.wait_type, r.last_wait_type, r.logical_reads
    , er.degree_of_parallelism, er.external_user_name
FROM sys.dm_exec_requests AS r
INNER JOIN sys.dm_external_script_requests AS er
    ON r.external_script_request_id = er.external_script_request_id
INNER JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id;
Column Description
session_id: Identifies the session associated with each active primary connection.
blocking_session_id: ID of the session that is blocking the request. If this column is NULL, the request is not blocked, or the session information of the blocking session is not available (or cannot be identified).
login_name: SQL Server login name under which the session is currently executing.
wait_time: If the request is currently blocked, this column returns the duration, in milliseconds, of the current wait. Is not nullable.
wait_type: If the request is currently blocked, this column returns the type of wait. For information about types of waits, see sys.dm_os_wait_stats.
last_wait_type: If this request has previously been blocked, this column returns the type of the last wait.
logical_reads: Number of logical reads that have been performed by the request.
degree_of_parallelism: Number indicating the number of parallel processes that were created. This value might be different from the number of parallel processes that were requested.
external_user_name: The Windows worker account under which the script was executed.
Execution statistics
View the execution statistics for the external runtime for R and Python. Only statistics of
RevoScaleR, revoscalepy, or microsoftml package functions are currently available.
Run the query below to get this output. For more information on the dynamic
management view used, see sys.dm_external_script_execution_stats. The query only
returns functions that have been executed more than once.
SQL
SELECT language, counter_name, counter_value
FROM sys.dm_external_script_execution_stats
WHERE counter_value > 1
ORDER BY counter_value DESC;
Column Description
counter_value: Total number of instances that the registered external script function has been called on the server. This value is cumulative, beginning with the time that the feature was installed on the instance, and cannot be reset.
Performance counters
View the performance counters related to the execution of external scripts.
Run the query below to get this output. For more information on the dynamic
management view used, see sys.dm_os_performance_counters.
SQL
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%External Scripts%';
Counter Description
Parallel Executions: Number of times that a script included the @parallel specification and SQL Server was able to generate and use a parallel query plan.
Streaming Executions: Number of times that the streaming feature has been invoked.
SQL CC Executions: Number of external scripts run where the call was instantiated remotely and SQL Server was used as the compute context.
Implied Auth. Logins: Number of times that an ODBC loopback call was made using implied authentication; that is, SQL Server executed the call on behalf of the user sending the script request.
Execution Errors: Number of times scripts reported errors. This count does not include R or Python errors.
Memory usage
View information about the memory used by the OS, SQL Server, and the external pools.
Run the query below to get this output. For more information on the dynamic
management views used, see sys.dm_resource_governor_external_resource_pools and
sys.dm_os_sys_info.
SQL
SELECT physical_memory_kb, committed_kb, committed_target_kb
    , (SELECT SUM(peak_memory_kb)
       FROM sys.dm_resource_governor_external_resource_pools AS ep
      ) AS external_pool_peak_memory_kb
FROM sys.dm_os_sys_info;
Column Description
Run the query below to get this output. For more information on the views used, see
sys.configurations and sys.dm_resource_governor_external_resource_pools.
SQL
END AS max_memory_percent
FROM sys.configurations AS c
UNION ALL
Column Description
max_memory_percent: The maximum memory that SQL Server or the external resource pool can use.
Resource pools
In SQL Server Resource Governor, a resource pool represents a subset of the physical
resources of an instance. You can specify limits on the amount of CPU, physical IO, and
memory that incoming application requests, including execution of external scripts, can
use within the resource pool. View the resource pools used for SQL Server and external
scripts.
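As an illustration of such a limit (the pool name and values here are hypothetical), an external resource pool for script workloads could be created along these lines:

```sql
-- Hypothetical external resource pool capping external scripts
-- at 50% CPU and 25% of the allowed external memory.
CREATE EXTERNAL RESOURCE POOL ep_ml
WITH (max_cpu_percent = 50, max_memory_percent = 25);

-- Apply the Resource Governor configuration change.
ALTER RESOURCE GOVERNOR RECONFIGURE;
```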
Run the query below to get this output. For more information on the dynamic
management views used, see sys.dm_resource_governor_resource_pools and
sys.dm_resource_governor_external_resource_pools.
SQL
SELECT CONCAT('SQL Server - ', p.name) AS pool_name
    , p.total_cpu_usage_ms, p.read_io_completed_total,
p.write_io_completed_total
FROM sys.dm_resource_governor_resource_pools AS p
UNION ALL
SELECT CONCAT('External Pool - ', ep.name) AS pool_name
    , ep.total_cpu_user_ms, ep.read_io_count, ep.write_io_count
FROM sys.dm_resource_governor_external_resource_pools AS ep;
Column Description
pool_name: Name of the resource pool. SQL Server resource pools are prefixed with SQL Server and external resource pools are prefixed with External Pool.
read_io_completed_total: The total read IOs completed since the Resource Governor statistics were reset.
write_io_completed_total: The total write IOs completed since the Resource Governor statistics were reset.
Installed packages
You can view the R and Python packages that are installed in SQL Server Machine
Learning Services by executing an R or Python script that outputs them.
SQL
EXECUTE sp_execute_external_script
  @language = N'R'
  , @script = N'
OutputDataSet <- data.frame(installed.packages()[, c("Package", "Version", "Depends", "License")]);'
WITH RESULT SETS ((Package NVARCHAR(255), Version NVARCHAR(100), Depends NVARCHAR(4000), License NVARCHAR(1000)));
Column Description
Depends: Lists the package(s) that the installed package depends on.
SQL
EXECUTE sp_execute_external_script
  @language = N'Python'
  , @script = N'
import pkg_resources
import pandas
OutputDataSet = pandas.DataFrame([(d.project_name, d.version) for d in pkg_resources.working_set])'
WITH RESULT SETS ((Package NVARCHAR(128), Version NVARCHAR(128)));
Next steps
Extended events for machine learning
Resource Governor Related Dynamic Management Views
System Dynamic Management Views
Monitor machine learning using custom reports in Management Studio
Monitor Python and R scripts with
extended events in SQL Server Machine
Learning Services
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
Learn how to use extended events to monitor and troubleshoot operations related to
SQL Server Machine Learning Services, SQL Server Launchpad, and Python or R
external scripts.
SQL
SELECT o.name AS event_name, o.description
FROM sys.dm_xe_objects o
JOIN sys.dm_xe_packages p
ON o.package_guid = p.guid
WHERE p.name = 'SQLSatellite';
For more information about how to use extended events, see Extended Events Tools.
For more information about how to do this, see the section, Collecting events from
external processes.
Table of extended events
Event Description
satellite_abort_connection: Abort connection record.
satellite_data_receive_completion: Fires when all the data required by a query is received over the satellite connection. Fired only from the external process; see the instructions on collecting events from external processes.
satellite_invalid_sized_message: Message's size is not valid.
satellite_message_summary: Summary information about messaging.
satellite_message_version_mismatch: Message's version field is not matched.
satellite_sessionId_mismatch: Message's session ID is not expected.
Server\MSSQL_version_number.MSSQLSERVER\MSSQL\Binn .
BXLServer is the satellite process that supports SQL extensibility with external
script languages, such as R or Python. A separate instance of BxlServer is launched
for each external language instance.
To capture events related to BXLServer, place the .xml file in the R or Python
installation directory. In a default installation, this would be:
64 .
The configuration file must be named the same as the executable, using the format "
[name].xevents.xml". In other words, the files must be named as follows:
Launchpad.xevents.xml
bxlserver.xevents.xml
XML
<event_sessions>
  <event_session name="[SessionName]">
    <event package="SQLSatellite" name="[XEvent Name 1]" />
    <event package="SQLSatellite" name="[XEvent Name 2]" />
    <target package="package0" name="event_file">
      <parameter name="filename" value="[SessionName].xel" />
    </target>
  </event_session>
</event_sessions>
To configure the trace, edit the session name placeholder, the placeholder for the
filename ( [SessionName].xel ), and the names of the events you want to capture
(for example, [XEvent Name 1] , [XEvent Name 2] ).
Any number of event package tags may appear, and will be collected as long as
the name attribute is correct.
XML
<event_sessions>
  <event_session name="[SessionName]">
    <event package="SQLSatellite" name="[XEvent Name 1]" />
    <target package="package0" name="event_file">
      <parameter name="filename" value="[SessionName].xel" />
    </target>
  </event_session>
</event_sessions>
Place the .xml file in the Binn directory for the SQL Server instance.
This file must be named Launchpad.xevents.xml .
XML
<event_sessions>
  <event_session name="[SessionName]">
    <event package="SQLSatellite"
           name="satellite_unexpected_message_received" />
    <target package="package0" name="event_file">
      <parameter name="filename" value="[SessionName].xel" />
    </target>
  </event_session>
</event_sessions>
Place the .xml file in the same directory as the BXLServer executable.
This file must be named bxlserver.xevents.xml .
Next steps
Monitor Python and R script execution using custom reports in SQL Server
Management Studio
Monitor SQL Server Machine Learning Services using dynamic management views
(DMVs)
Monitor PREDICT T-SQL statements
with extended events in SQL Server
Machine Learning Services
Article • 03/03/2023
Applies to:
SQL Server 2017 (14.x) and later
Azure SQL Managed Instance
Learn how to use extended events to monitor and troubleshoot PREDICT T-SQL
statements in SQL Server Machine Learning Services.
SQL
SELECT *
FROM sys.dm_xe_object_columns
WHERE object_name = 'predict_function_completed';
Examples
To capture information about performance of a scoring session using PREDICT:
The value for predict_function_completed shows how much time the query spent
on loading the model and scoring.
The boolean value for predict_model_cache_hit indicates whether the query used
a cached model or not.
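For instance, a session capturing both events might be set up along these lines (the session name and target file name are placeholders):

```sql
-- Sketch: capture PREDICT-related extended events to an event file.
CREATE EVENT SESSION [PredictEvents] ON SERVER
    ADD EVENT sqlserver.predict_function_completed,
    ADD EVENT sqlserver.predict_model_cache_hit
    ADD TARGET package0.event_file (SET filename = N'PredictEvents.xel');

-- Start collecting events.
ALTER EVENT SESSION [PredictEvents] ON SERVER STATE = START;
```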
SQL
SELECT *
FROM sys.dm_os_memory_clerks
SELECT *
FROM sys.dm_os_memory_objects
Next steps
For more information about extended events (sometimes called XEvents), and how to
track events in a session, see these articles:
Monitor Python and R scripts with extended events in SQL Server Machine
Learning Services
Extended Events concepts and architecture
Set up event capture in SSMS
Manage event sessions in the Object Explorer
Grant database users permission to
execute Python and R scripts with SQL
Server Machine Learning Services
Article • 03/03/2023
Applies to:
SQL Server 2016 (13.x) and later
Azure SQL Managed Instance
Learn how you can give a database user permission to run external Python and R scripts
in SQL Server Machine Learning Services and give read, write, or data definition
language (DDL) permissions to databases.
For more information, see the permissions section in Security overview for the
extensibility framework.
To grant permission to a database user to execute external scripts, run the following
script:
SQL
USE <database_name>
GO
GRANT EXECUTE ANY EXTERNAL SCRIPT TO [UserName]
7 Note
Permissions are not specific to the supported script language. In other words, there
are not separate permission levels for R script versus Python script.
For each database user account or SQL login that is running R or Python scripts, ensure
that it has the appropriate permissions on the specific database:
For example, the following Transact-SQL statement gives the SQL login MySQLLogin the
rights to run T-SQL queries in the ML_Samples database. To run this statement, the SQL
login must already exist in the security context of the server. For more information, see
sp_addrolemember (Transact-SQL).
SQL
USE ML_Samples
GO
EXEC sp_addrolemember 'db_datareader', 'MySQLLogin'
Next steps
For more information about the permissions included in each role, see Database-level
roles.
Linked Servers (Database Engine)
Article • 03/03/2023
Applies to:
SQL Server
Azure SQL Managed Instance
Linked servers enable the SQL Server Database Engine and Azure SQL Managed
Instance to read data from the remote data sources and execute commands against the
remote database servers (for example, OLE DB data sources) outside of the instance of
SQL Server. Typically linked servers are configured to enable the Database Engine to
execute a Transact-SQL statement that includes tables in another instance of SQL Server,
or another database product such as Oracle. Many types of OLE DB data sources can be
configured as linked servers, including third-party database providers and Azure
Cosmos DB.
7 Note
Linked servers are available in SQL Server Database Engine and Azure SQL
Managed Instance. They are not enabled in Azure SQL Database singleton and
elastic pools. There are some constraints in Managed Instance that can be found
here.
You can configure a linked server by using SQL Server Management Studio or by using
the sp_addlinkedserver (Transact-SQL) statement. OLE DB providers vary greatly in the
type and number of parameters required. For example, some providers require you to
provide a security context for the connection using sp_addlinkedsrvlogin (Transact-SQL).
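For example, a linked server definition using the MSOLEDBSQL provider might look like the following sketch; the server name, host, and login values are hypothetical:

```sql
-- Define a linked server named SalesSrv over the Microsoft OLE DB Driver.
EXEC sp_addlinkedserver
    @server = N'SalesSrv',
    @srvproduct = N'',
    @provider = N'MSOLEDBSQL',
    @datasrc = N'remotehost';

-- Map local logins to a remote SQL login for the new linked server.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'SalesSrv',
    @useself = 'FALSE',
    @rmtuser = N'remoteuser',
    @rmtpassword = N'<password>';
```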
Some OLE DB providers allow SQL Server to update data on the OLE DB source. Others
provide only read-only data access. For information about each OLE DB provider,
consult documentation for that OLE DB provider.
An OLE DB provider
An OLE DB provider is a DLL that manages and interacts with a specific data source. An
OLE DB data source identifies the specific database that can be accessed through OLE
DB. Although data sources queried through linked server definitions are ordinarily
databases, OLE DB providers exist for a variety of files and file formats. These include
text files, spreadsheet data, and the results of full-text content searches.
Starting with SQL Server 2019 (15.x), the Microsoft OLE DB Driver for SQL Server
(MSOLEDBSQL) (PROGID: MSOLEDBSQL) is the default OLE DB provider. In earlier
versions, the SQL Server Native Client OLE DB provider (SQLNCLI) (PROGID: SQLNCLI11)
was the default OLE DB provider.
) Important
The SQL Server Native Client (often abbreviated SNAC) has been removed from
SQL Server 2022 (16.x) and SQL Server Management Studio 19 (SSMS). Both the
SQL Server Native Client OLE DB provider (SQLNCLI or SQLNCLI11) and the legacy
Microsoft OLE DB Provider for SQL Server (SQLOLEDB) are not recommended for
new development. Switch to the new Microsoft OLE DB Driver (MSOLEDBSQL) for
SQL Server going forward.
Linked servers to Microsoft Access and Excel sources are only supported by Microsoft
when using the 32-bit Microsoft.JET.OLEDB.4.0 OLE DB provider.
7 Note
SQL Server distributed queries are designed to work with any OLE DB provider that
implements the required OLE DB interfaces. However, SQL Server has been tested
against the default OLE DB provider.
Linked server details
The following illustration shows the basics of a linked server configuration.
Typically, linked servers are used to handle distributed queries. When a client application
executes a distributed query through a linked server, SQL Server parses the command
and sends requests to OLE DB. The rowset request may be in the form of executing a
query against the provider or opening a base table from the provider.
7 Note
For a data source to return data through a linked server, the OLE DB provider (DLL)
for that data source must be present on the same server as the instance of SQL
Server.
) Important
When an OLE DB provider is used, the account under which the SQL Server service
runs must have read and execute permissions for the directory, and all
subdirectories, in which the provider is installed. This includes Microsoft-released
providers, and any third-party providers.
7 Note
Linked servers support Active Directory pass-through authentication when using
full delegation. Starting with SQL Server 2017 (14.x) CU17, pass-through
authentication with constrained delegation is also supported; however, resource-
based constrained delegation is not supported.
Manage providers
A set of options, specified in the registry, controls how SQL Server loads and uses OLE DB providers.
You can use stored procedures and catalog views to manage linked server definitions:
View information about the linked servers defined in a specific instance of SQL
Server by running a query against the sys.servers system catalog view.
Delete a linked server definition by running sp_dropserver . You can also use this
stored procedure to remove a remote server.
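Both operations can be sketched as follows; the linked server name is a hypothetical placeholder:

```sql
-- View the linked servers defined on this instance.
SELECT name, provider, data_source
FROM sys.servers
WHERE is_linked = 1;

-- Remove a linked server definition and its associated logins.
-- 'RemoteSql' is a hypothetical name.
EXEC sp_dropserver @server = N'RemoteSql', @droplogins = 'droplogins';
```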
You can also define linked servers by using SQL Server Management Studio. In the
Object Explorer, right-click Server Objects, select New, and select Linked Server. You
can delete a linked server definition by right-clicking the linked server name and
selecting Delete.
When you execute a distributed query against a linked server, include a fully qualified,
four-part table name for each data source to query. This four-part name should be in
the form linked_server_name.catalog.schema.object_name.
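For example, a distributed query against a hypothetical linked server named RemoteSql might look like the following sketch (the database, schema, and table names are also placeholders):

```sql
-- Four-part name: linked_server_name.catalog.schema.object_name
SELECT o.OrderID, o.OrderDate
FROM RemoteSql.SalesDb.dbo.Orders AS o
WHERE o.OrderDate >= '20240101';
```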
Note
Linked servers can be defined to point back (loop back) to the server on which they
are defined. Loopback servers are most useful when testing an application that uses
distributed queries on a single server network. Loopback linked servers are
intended for testing and are not supported for many operations, such as
distributed transactions.
Note
Existing linked server definitions that were configured for pass-through mode support Azure AD authentication. The only requirement is to add the managed instances to a Server Trust Group.
See also
sys.servers (Transact-SQL)
sp_linkedservers (Transact-SQL)
Next steps
Create Linked Servers (SQL Server Database Engine)
sp_addlinkedserver (Transact-SQL)
sp_addlinkedsrvlogin (Transact-SQL)
sp_dropserver (Transact-SQL)
Service Broker
Article • 11/18/2022
Applies to:
SQL Server
Azure SQL Managed Instance
SQL Server Service Broker provides native support for messaging and queuing in the SQL
Server Database Engine and Azure SQL Managed Instance. Developers can easily create
sophisticated applications that use the Database Engine components to communicate
between disparate databases, and build distributed and reliable applications.
Overview
Service Broker is a message delivery framework that enables you to create native, in-database, service-oriented applications. Unlike classic query processing, which constantly reads data from tables and processes it during the query lifecycle, a service-oriented application has database services that exchange messages. Every service has a queue where messages are placed until they are processed.
Messages in a queue can be fetched by using the Transact-SQL RECEIVE statement or by an activation procedure that is called whenever a message arrives in the queue.
Creating services
Database services are created by using the CREATE SERVICE Transact-SQL statement, and a service can be associated with a message queue created by using the CREATE QUEUE statement:
SQL
CREATE QUEUE dbo.ExpenseQueue;
GO
CREATE SERVICE ExpensesService
ON QUEUE dbo.ExpenseQueue;
Sending messages
Messages are sent on a conversation between services by using the SEND Transact-SQL statement. A conversation is a communication channel that is established between the services by using the BEGIN DIALOG Transact-SQL statement.
SQL
-- The initiating service name and message body are illustrative.
DECLARE @dialog_handle UNIQUEIDENTIFIER;

BEGIN DIALOG @dialog_handle
FROM SERVICE ExpensesClient
TO SERVICE 'ExpensesService';

SEND ON CONVERSATION @dialog_handle (N'expense report submitted');
Processing messages
The messages that are placed in the queue can be viewed by using a standard SELECT query. The SELECT statement doesn't modify the queue or remove messages. To read and remove messages from the queue, use the RECEIVE Transact-SQL statement.
SQL
RECEIVE conversation_handle, message_type_name, message_body
FROM ExpenseQueue;
Once you process all messages from the queue, you should close the conversation using
the END CONVERSATION Transact-SQL statement.
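Closing the dialog can be sketched as follows, assuming @dialog_handle holds the conversation handle returned by BEGIN DIALOG or RECEIVE:

```sql
-- Close the conversation once all of its messages are processed.
END CONVERSATION @dialog_handle;
```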
Data Definition Language (DDL) Statements (Transact-SQL) for CREATE, ALTER, and
DROP statements
See the previously published documentation for Service Broker concepts and for
development and management tasks. This documentation is not reproduced in the SQL
Server documentation due to the small number of changes in Service Broker in recent
versions of SQL Server.
CREATE ROUTE : You can't use CREATE ROUTE with ADDRESS other than LOCAL or
DNS name of another SQL Managed Instance. Port specified must be 4022. See
CREATE ROUTE.
ALTER ROUTE : You can't use ALTER ROUTE with ADDRESS other than LOCAL or the DNS
name of another SQL Managed Instance. The port specified must be 4022. See
ALTER ROUTE.
Service Broker is enabled by default and cannot be disabled. The following ALTER
DATABASE options are not supported:
ENABLE_BROKER
DISABLE_BROKER
No significant changes were introduced in SQL Server 2019 (15.x). The following
changes were introduced in SQL Server 2012 (11.x).
Next steps
The most common use of Service Broker is for event notifications. Learn how to
implement event notifications, configure dialog security, or get more information.
Database Mail
Article • 02/28/2023
Applies to:
SQL Server
Azure SQL Managed Instance
Database Mail is an enterprise solution for sending e-mail messages from the SQL
Server Database Engine or Azure SQL Managed Instance. Your applications can send e-
mail messages to users using Database Mail via an external SMTP server. The messages
can contain query results, and can also include files from any resource on your network.
Note
Database Mail is available in SQL Server Database Engine and Azure SQL Managed
Instance, but not in Azure SQL database singleton and elastic pools. For more
information on using Database Mail in Azure SQL Managed Instance, see Automate
management tasks using SQL Agent jobs in Azure SQL Managed Instance.
Reliability
Database Mail uses the standard Simple Mail Transfer Protocol (SMTP) to send
mail. You can use Database Mail without installing an Extended MAPI client on the
computer that runs SQL Server.
Process isolation. To minimize the impact on SQL Server, the component that
delivers e-mail runs outside of SQL Server, in a separate process. SQL Server will
continue to queue e-mail messages even if the external process stops or fails. The
queued messages will be sent once the outside process or SMTP server comes
online.
Failover accounts. A Database Mail profile allows you to specify more than one
SMTP server. Should an SMTP server be unavailable, mail can still be delivered to
another SMTP server.
Scalability
Background Delivery: Database Mail provides background, or asynchronous,
delivery. When you call sp_send_dbmail to send a message, Database Mail adds a
request to a Service Broker queue. The stored procedure returns immediately. The
external e-mail component receives the request and delivers the e-mail.
Multiple profiles: Database Mail allows you to create multiple profiles within a SQL
Server instance. Optionally, you can choose the profile that Database Mail uses
when you send a message.
Multiple accounts: Each profile can contain multiple failover accounts. You can
configure different profiles with different accounts to distribute e-mail across
multiple e-mail servers.
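The background-delivery call in the first point can be sketched as follows; the profile name and recipient address are hypothetical placeholders, not values from this article:

```sql
-- Queue a message for background delivery; sp_send_dbmail returns
-- immediately while the external process delivers the e-mail.
-- 'SqlAlerts' and the recipient address are hypothetical.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'SqlAlerts',
    @recipients   = 'dba@example.com',
    @subject      = 'Nightly backup report',
    @query        = 'SELECT name, state_desc FROM sys.databases;';
```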
Security
Off by default: To reduce the surface area of SQL Server, Database Mail stored
procedures are disabled by default.
Profile security: Database Mail enforces security for mail profiles. You choose the
msdb database users or groups that have access to a Database Mail profile. You can
grant access to either specific users, or all users in msdb . A private profile restricts
access to a specified list of users. A public profile is available to all users in a
database.
Database Mail runs under the SQL Server Engine service account. To attach a file
from a folder to an email, the SQL Server engine account should have permissions
to access the folder with the file.
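Two of these points can be sketched in Transact-SQL: enabling the stored procedures, and granting a specific msdb user access to a private profile. The profile and user names below are hypothetical placeholders:

```sql
-- Database Mail stored procedures are disabled by default;
-- enable them with the 'Database Mail XPs' option (requires sysadmin).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

-- Grant the msdb user 'AppUser' access to the private profile
-- 'SqlAlerts'; both names are hypothetical.
EXEC msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name   = 'SqlAlerts',
    @principal_name = 'AppUser',
    @is_default     = 0;
```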
Supportability
Integrated configuration: Database Mail maintains the information for e-mail
accounts within SQL Server Database Engine. There is no need to manage a mail
profile in an external client application. Database Mail Configuration Wizard
provides a convenient interface for configuring Database Mail. You can also create
and maintain Database Mail configurations using Transact-SQL.
Logging. Database Mail logs e-mail activity to SQL Server, the Microsoft Windows
application event log, and to tables in the msdb database.
Auditing: Database Mail keeps copies of messages and attachments sent in the
msdb database. You can easily audit Database Mail usage and review the retained
messages.
Support for HTML: Database Mail allows you to send e-mail formatted as HTML.
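The retained logs and message copies mentioned above can be queried directly in msdb, for example:

```sql
-- Recent Database Mail activity from the msdb event log.
SELECT log_date, event_type, description
FROM msdb.dbo.sysmail_event_log
ORDER BY log_date DESC;

-- Retained copies of sent (and failed) messages.
SELECT send_request_date, recipients, subject, sent_status
FROM msdb.dbo.sysmail_allitems
ORDER BY send_request_date DESC;
```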
Database Mail stores configuration and security information in the msdb database. These configuration and security objects define the profiles and accounts that Database Mail uses.
Messaging components
The msdb database acts as the mail-host database that holds the messaging
objects that Database Mail uses to send e-mail. These objects include the
sp_send_dbmail stored procedure and the data structures that hold information
about messages.
The Database Mail executable is an external program that reads from a queue in
the msdb database and sends messages to e-mail servers.
Warning
Individual job steps within a job can also send e-mail without configuring SQL
Server Agent to use Database Mail. For example, a Transact-SQL job step can use
Database Mail to send the results of a query to a list of recipients.
You can configure SQL Server Agent to send e-mail messages to predefined operators
when:
See also
Database Mail Configuration Objects
Database Mail Messaging Objects
Database Mail External Program
Database Mail Log and Audits
Next steps
Configure Database Mail
Configure SQL Server Agent Mail to Use Database Mail
Automate management tasks using SQL Agent jobs in Azure SQL Managed
Instance
Migrate SQL Managed Instance to
availability zone support
Article • 05/26/2023
Important
Zone redundancy for SQL Managed Instance is currently in preview. To learn which
regions support SQL Managed Instance zone redundancy, see Services support by region.
SQL Managed Instance offers a zone redundant configuration that uses Azure
availability zones to replicate your instances across multiple physical locations within an
Azure region. With zone redundancy enabled, your Business Critical managed instances
become resilient to a larger set of failures, such as catastrophic datacenter outages,
without any changes to application logic. For more information on the availability model
for SQL Database, see Business Critical service tier zone redundant availability section in
the Azure SQL documentation.
This guide describes how to migrate SQL Managed Instances that use Business Critical
service tier from non-availability zone support to availability zone support. Once the
zone redundant option is enabled, Azure SQL Managed Instance automatically
reconfigures the instance.
Prerequisites
To migrate to availability-zone support:
1. Your instance must be running under Business Critical tier with the November 2022
feature wave update. To learn more about how to onboard an existing SQL
managed instance to the November 2022 update, see November 2022 Feature
Wave for Azure SQL Managed Instance
2. Confirm that your instance is located in a supported region. To see the list of
supported regions, see Premium and Business Critical service tier zone redundant
availability.
Downtime requirements
All scaling operations in Azure SQL are online operations and require minimal to no
downtime. For more details on Azure SQL dynamic scaling, see Dynamically scale
database resources with minimal downtime.
Azure portal
2. Go to the instance of SQL Managed Instance that you want to enable for zone
redundancy.
3. In the Create Azure SQL Managed Instance tab, select Configure Managed
Instance.
4. In the Compute + Storage page, select Yes to make the instance zone
redundant.
6. Select Apply.
Next steps
Get started with SQL Managed Instance with our Quick Start reference guide
Learn more about Azure SQL Managed Instance zone redundancy and high
availability
SQL Server on Azure VM documentation
Find concepts, quickstarts, tutorials, and samples for SQL Server installed to Azure virtual
machines, both Windows and Linux.
QUICKSTART
VIDEO
OVERVIEW
What's new?
Security considerations
Performance guidelines
Pricing guidance
Manage
CONCEPT
Automated patching
Business continuity
OVERVIEW
Availability groups
HOW-TO GUIDE
TUTORIAL
TRAINING
Reference
DEPLOY
Azure portal
Azure CLI
PowerShell samples
DOWNLOAD
REFERENCE
Migration guide
Transact-SQL (T-SQL)
Azure CLI
PowerShell
REST API
What's new with SQL Server on Azure
Virtual Machines?
Article • 07/14/2023
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, either
manually, or through a built-in image, you can use Azure features to improve your
experience. This article summarizes the documentation changes associated with new
features and improvements in the recent releases of SQL Server on Azure Virtual
Machines (VMs). To learn more about SQL Server on Azure VMs, see the overview.
For updates made in previous years, see the What's new archive.
July 2023
Note
SQL Server 2008 and SQL Server 2008 R2 are out of extended support and no
longer available from the Azure Marketplace.
May 2023
Azure SQL bindings for Azure Functions GA: Azure Functions supports input bindings and output bindings for the Azure SQL and SQL Server products. This feature is now generally available. Review Azure SQL bindings for Azure Functions to learn more.
April 2023
Auto upgrade SQL IaaS Agent extension: It's now possible to enable auto upgrade for your SQL IaaS Agent extension to ensure you're automatically receiving the latest updates to the extension every month. Review SQL IaaS Agent Settings to learn more.
March 2023
Removed extension management modes: The architecture for the SQL IaaS Agent extension has been updated to remove management modes. All newly deployed SQL Server VMs are registered with the extension by using the same default configuration and least privileged security model. To learn more, review Management modes.
February 2023
Enable Azure AD for SQL Server: We've published a guide to help you enable Azure AD authentication for your SQL Server VM. Review Configure Azure AD to learn more.
January 2023
Extend your multi-subnet AG to multiple regions: Extend an existing multi-subnet availability group, either on Azure virtual machines or on-premises, to another region in Azure. To learn more, review Multi-subnet availability group in multiple regions.
2022
Troubleshoot SQL IaaS Agent extension: We've added an article to help you troubleshoot and address some known issues with the SQL Server IaaS Agent extension. To learn more, read Troubleshoot known issues.
Azure AD authentication: It's now possible to configure Azure Active Directory (Azure AD) authentication for your SQL Server 2022 on Azure VM by using the Azure portal. This feature is currently in preview. To get started, review Azure AD with SQL Server VMs.
Least privilege permission model for SQL IaaS Agent extension: There is a new permissions model available for the SQL Server IaaS Agent extension that grants the least privileged permission for each feature used by the extension. To learn more, review SQL IaaS Agent extension permissions.
Confidential VMs: SQL Server on Azure VMs has added support to deploy SQL Server on Azure confidential VMs. To get started, review the Quickstart: Deploy SQL Server to an Azure confidential VM.
Azure CLI for SQL best practices assessment: It's now possible to configure the SQL best practices assessment feature using the Azure CLI.
Configure tempdb from Azure portal: It's now possible to configure your tempdb settings, such as the number of files, initial size, and autogrowth ratio, for an existing SQL Server instance by using the Azure portal. See manage SQL Server VM from portal to learn more.
SDK-style SQL projects: Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL Database Projects extension in Azure Data Studio or VS Code. This feature is currently in preview. To learn more, see SDK-style SQL projects.
Security best practices: The SQL Server VM security best practices have been rewritten and refreshed!
Migrate with distributed AG: It's now possible to migrate your database(s) from a standalone instance of SQL Server, or an entire availability group, over to SQL Server on Azure VMs by using a distributed availability group.
Contribute to content
To contribute to the Azure SQL documentation, see the Docs contributor guide.
Additional resources
Windows VMs:
Linux VMs:
This article provides an overview of SQL Server on Azure Virtual Machines (VMs) on the
Windows platform.
If you're new to SQL Server on Azure VMs, check out the SQL Server on Azure VM
Overview video from our in-depth Azure SQL video series:
https://learn.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-
Overview-4-of-61/player
Overview
SQL Server on Azure Virtual Machines enables you to use full versions of SQL Server in
the cloud without having to manage any on-premises hardware. SQL Server virtual
machines (VMs) also simplify licensing costs when you pay as you go.
Azure virtual machines run in many different geographic regions around the world.
They also offer various machine sizes. The virtual machine image gallery allows you to
create a SQL Server VM with the right version, edition, and operating system. This makes
virtual machines a good option for many different SQL Server workloads.
Feature benefits
When you register your SQL Server on Azure VM with the SQL IaaS Agent extension, you
unlock a number of feature benefits. Registering with the extension is completely free.
Portal management: Unlocks management in the portal, so that you can view all of your SQL Server VMs in one place, and enable or disable SQL specific features directly from the portal.
Automated backup: Automates the scheduling of backups for all databases for either the default instance or a properly installed named instance of SQL Server on the VM. For more information, see Automated backup for SQL Server in Azure virtual machines (Resource Manager).
Azure Key Vault integration: Enables you to automatically install and configure Azure Key Vault on your SQL Server VM. For more information, see Configure Azure Key Vault integration for SQL Server on Azure Virtual Machines (Resource Manager).
Flexible version / edition: If you decide to change the version or edition of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM.
Configure tempdb: You can configure your tempdb directly from the Azure portal, such as specifying the number of files, their initial size, their location, and the autogrowth ratio. Restart your SQL Server service for the changes to take effect.
Defender for Cloud portal integration: If you've enabled Microsoft Defender for SQL, then you can view Defender for Cloud recommendations directly in the SQL virtual machines resource of the Azure portal. See Security best practices to learn more.
SQL best practices assessment: Enables you to assess the health of your SQL Server VMs using configuration best practices. For more information, see SQL best practices assessment.
View disk utilization in portal: Allows you to view a graphical representation of the disk utilization of your SQL data files in the Azure portal. Requires the SQL IaaS Agent extension.
Getting started
To get started with SQL Server on Azure VMs, review the following resources:
Create SQL VM: To create your SQL Server on Azure VM, review the Quickstarts
using the Azure portal, Azure PowerShell or an ARM template. For more thorough
guidance, review the Provisioning guide.
Connect to SQL VM: To connect to your SQL Server on Azure VMs, review the ways
to connect.
Migrate data: Migrate your data to SQL Server on Azure VMs from SQL Server,
Oracle, or Db2.
Storage configuration: For information about configuring storage for your SQL
Server on Azure VMs, review Storage configuration.
Performance: Fine-tune the performance of your SQL Server on Azure VM by
reviewing the Performance best practices checklist.
Pricing: For information about the pricing structure of your SQL Server on Azure
VM, review the Pricing guidance.
Frequently asked questions: For commonly asked questions, and scenarios, review
the FAQ.
Videos
For videos about the latest features to optimize SQL Server VM performance and
automate management, review the following Data Exposed videos:
To learn more, see the overview of Always On availability groups, and Always On failover
cluster instances. For more details, see the business continuity overview.
To get started, see the tutorials for availability groups or preparing your VM for a
failover cluster instance.
Licensing
To get started, choose a SQL Server virtual machine image with your required version,
edition, and operating system. The following sections provide direct links to the Azure
portal for the SQL Server virtual machine gallery images. Change the licensing model of
a pay-per-usage SQL Server VM to use your own license. For more information, see How
to change the licensing model for a SQL Server VM.
Azure only maintains one virtual machine image for each supported operating system,
version, and edition combination. This means that over time images are refreshed, and
older images are removed. For more information, see the Images section of the SQL
Server VMs FAQ.
Tip
For more information about how to understand pricing for SQL Server images, see
Pricing guidance for SQL Server on Azure Virtual Machines.
SQL Server 2008 and SQL Server 2008 R2 are out of extended support and no
longer available from the Azure Marketplace.
To see the available SQL Server on Linux virtual machine images, see Overview of SQL
Server on Azure Virtual Machines (Linux).
It's possible to deploy an older image of SQL Server that isn't available in the Azure
portal by using PowerShell. To view all available images by using PowerShell, use the
following command:
PowerShell
# $Location is a placeholder for your Azure region, for example 'eastus'.
Get-AzVMImageOffer -Location $Location -Publisher 'MicrosoftSQLServer'
For more information about deploying SQL Server VMs using PowerShell, view How to
provision SQL Server virtual machines with Azure PowerShell.
Important
Older images might be outdated. Remember to apply all SQL Server and Windows
updates before using them for production.
Next steps
Get started with SQL Server on Azure Virtual Machines:
View Reference Architectures for running N-tier applications on SQL Server in IaaS
Applies to:
SQL Server on Azure VM
The SQL Server IaaS Agent extension (SqlIaasExtension) runs on SQL Server on Azure
Windows Virtual Machines (VMs) to automate management and administration tasks.
This article provides an overview of the extension. To install the SQL Server IaaS Agent
extension to SQL Server on Azure VMs, see the articles for Automatic registration,
Register single VMs, or Register VMs in bulk.
Note
To learn more about the Azure VM deployment and management experience, including
recent improvements, see:
Azure SQL VM: Automate Management with the SQL Server IaaS Agent extension
(Ep. 2)
Azure SQL VM: New and Improved SQL on Azure VM deployment and
management experience (Ep.8) | Data Exposed.
Overview
The SQL Server IaaS Agent extension allows for integration with the Azure portal, and
unlocks a number of benefits for SQL Server on Azure VMs:
Integration with centrally managed Azure Hybrid Benefit: SQL Server VMs
registered with the extension can integrate with Centrally managed Azure Hybrid
Benefit, making it easy to manage the Azure Hybrid Benefit for your SQL Server VMs
at scale.
Azure portal
You can use the SQL virtual machines resource in the Azure portal to quickly
identify SQL Server VMs that are using the Azure Hybrid Benefit.
Enable auto upgrade to ensure you're getting the latest updates to the extension each
month.
Management modes
Prior to March 2023, the SQL IaaS Agent extension relied on management modes to
define the security model, and unlock feature benefits. In March 2023, the extension
architecture was updated to remove management modes entirely, instead relying on the
principle of least privilege to give customers control over how they want to use the
extension on a feature-by-feature basis.
Starting in March 2023, when you first register with the extension, binaries are saved to
your virtual machine to provide you with basic functionality such as license
management. Once you enable any feature that relies on the agent, the binaries are
used to install the SQL IaaS Agent to your virtual machine, and permissions are assigned
to the SQL IaaS Agent service as needed by each feature that you enable.
Feature benefits
The SQL Server IaaS Agent extension unlocks a number of feature benefits for managing
your SQL Server VM, letting you pick and choose which benefit suits your business
needs. When you first register with the extension, the functionality is limited to a few
features that don't rely on the SQL IaaS Agent. Once you enable a feature that requires
it, the agent is installed to the SQL Server VM.
The following table details the benefits available through the SQL IaaS Agent extension,
and whether or not the agent is required:
Portal management: Unlocks management in the portal, so that you can view all of your SQL Server VMs in one place, and enable or disable SQL specific features directly from the portal.
Automated backup: Automates the scheduling of backups for all databases for either the default instance or a properly installed named instance of SQL Server on the VM. For more information, see Automated backup for SQL Server in Azure virtual machines (Resource Manager).
Automated patching: Configures a maintenance window during which important Windows and SQL Server security updates to your VM can take place, so you can avoid updates during peak times for your workload. For more information, see Automated patching for SQL Server in Azure virtual machines (Resource Manager).
Azure Key Vault integration: Enables you to automatically install and configure Azure Key Vault on your SQL Server VM. For more information, see Configure Azure Key Vault integration for SQL Server on Azure Virtual Machines (Resource Manager).
Flexible version / edition: If you decide to change the version or edition of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM.
Configure tempdb: You can configure your tempdb directly from the Azure portal, such as specifying the number of files, their initial size, their location, and the autogrowth ratio. Restart your SQL Server service for the changes to take effect.
Defender for Cloud portal integration: If you've enabled Microsoft Defender for SQL, then you can view Defender for Cloud recommendations directly in the SQL virtual machines resource of the Azure portal. See Security best practices to learn more.
SQL best practices assessment: Enables you to assess the health of your SQL Server VMs using configuration best practices. For more information, see SQL best practices assessment. Requires the SQL IaaS Agent extension.
View disk utilization in portal: Allows you to view a graphical representation of the disk utilization of your SQL data files in the Azure portal. Requires the SQL IaaS Agent extension.
Permissions models
There are two permission models for the SQL Server IaaS Agent extension - either full
sysadmin rights, or the principle of least privilege. The least privileged permission model
grants the minimum permissions required for each feature that you enable. Each feature
that you use is assigned a custom role in SQL Server, and the custom role is only
granted permissions that are required to perform actions related to the feature.
The principle of least privilege model is enabled by default for SQL Server VMs deployed
via Azure Marketplace after October 2022. Existing SQL Server VMs deployed prior to
this date, or VMs with self-installed SQL Server instances, use the sysadmin model by
default and can enable the least privileged permissions model in the Azure portal.
To enable the least privilege permissions model, go to your SQL virtual machines
resource, choose Additional features under Settings and then check the box next to
SQL IaaS Agent extension least privilege mode:
The following table defines the permissions and custom roles used by each feature of
the extension:
Deploying a SQL Server VM from an Azure Marketplace image through the Azure portal
automatically registers the SQL Server VM with the extension. However, if you choose to
self-install SQL Server on an Azure virtual machine, or provision an Azure virtual
machine from a custom VHD, then you must register your SQL Server VM with the SQL
IaaS Agent extension to unlock feature benefits. By default, self-installed Azure VMs with
SQL Server 2016 or later are automatically registered with the SQL IaaS Agent extension
when detected by the CEIP service. SQL Server VMs not detected by the CEIP should be
manually registered.
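A manual registration can be sketched with the Az.SqlVirtualMachine PowerShell module; the resource group, VM name, and location below are hypothetical placeholders, and the license type depends on your licensing arrangement:

```powershell
# Sketch: manually register an existing Azure VM that runs SQL Server
# with the SQL IaaS Agent extension. 'my-rg', 'my-sqlvm', and 'eastus'
# are hypothetical placeholders; -LicenseType may be PAYG, AHUB, or DR.
New-AzSqlVM -ResourceGroupName 'my-rg' -Name 'my-sqlvm' `
    -Location 'eastus' -LicenseType 'PAYG'
```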
When you register with the SQL IaaS Agent extension, binaries are copied to the virtual
machine, but the agent is not installed by default. The agent will only be installed when
you enable one of the features that require it, and the following two services will then
run on the virtual machine:
Microsoft SQL Server IaaS agent is the main service for the SQL IaaS Agent
extension and should run under the Local System account.
Microsoft SQL Server IaaS Query Service is a helper service that helps the
extension run queries within SQL Server and should run under the NT Service
account NT Service\SqlIaaSExtensionQuery .
Registering your SQL Server VM with the SQL Server IaaS Agent extension creates the
SQL virtual machine resource within your subscription, which is a separate resource from
the virtual machine resource. Unregistering your SQL Server VM from the extension
removes the SQL virtual machine resource from your subscription but won't drop the
underlying virtual machine.
Multiple instance support
The SQL IaaS Agent extension only works on virtual machines with multiple instances if
there is a default instance. When you register your virtual machine with the SQL IaaS
Agent extension, it registers the default instance, and that's the instance you'll be able
to manage from the Azure portal.
The SQL IaaS Agent extension does not support virtual machines with multiple named
instances if there is no default instance.
To use a named instance of SQL Server, deploy an Azure virtual machine, install a single
named SQL Server instance to it, and then register it with the SQL IaaS Agent extension.
Alternatively, to use a named instance with an Azure Marketplace SQL Server image,
follow these steps:
If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister the
SQL Server VM from the extension and register it again after your FCI is installed.
Azure portal
Go to your Virtual machine resource in the Azure portal (not the SQL virtual
machines resource, but the resource for your VM). Select Extensions under Settings.
You should see the SqlIaasExtension extension listed, as in the following example:
Limitations
The SQL IaaS Agent extension only supports:
SQL Server VMs deployed through the Azure Resource Manager. SQL Server VMs
deployed through the classic model aren't supported.
SQL Server VMs deployed to the public or Azure Government cloud. Deployments
to other private or government clouds aren't supported.
SQL Server FCIs with limited functionality. SQL Server FCIs registered with the
extension do not support features that require the agent, such as automated
backup, patching, and advanced portal management.
VMs with a single instance of SQL Server, or VMs with multiple instances if a
default instance exists.
SQL Server instance images only. The SQL IaaS Agent extension does not support
Reporting Services or Analysis Services images, such as the following: SQL Server
Reporting Services, Power BI Report Server, and SQL Server Analysis Services.
Privacy statements
When using SQL Server on Azure VMs and the SQL IaaS Agent extension, consider the
following privacy statements:
Automatic registration: By default, Azure VMs with SQL Server 2016 or later are
automatically registered with the SQL IaaS Agent extension when detected by the
CEIP service. Review the SQL Server privacy supplement for more information.
Data collection: The SQL IaaS Agent extension collects data for the express
purpose of giving customers optional benefits when using SQL Server on Azure
Virtual Machines. Microsoft will not use this data for licensing audits without the
customer's advance consent. See the SQL Server privacy supplement for more
information.
In-region data residency: SQL Server on Azure VMs and the SQL IaaS Agent
extension don't move or store customer data out of the region in which the VMs
are deployed.
Next steps
To install the SQL Server IaaS extension to SQL Server on Azure VMs, see the articles for
Automatic installation, Single VMs, or VMs in bulk. For problem resolution, read
Troubleshoot known issues with the extension.
Applies to:
SQL Server on Azure VM
This quickstart steps through creating a SQL Server virtual machine (VM) in the Azure
portal. Follow the article to deploy either a conventional SQL Server on Azure VM, or
SQL Server deployed to an Azure confidential VM.
1. Sign in to the Azure portal.
2. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in
the list, select All services, then type Azure SQL in the search box.
3. Select +Add to open the Select SQL deployment option page. You can view
additional information by selecting Show details on the SQL virtual machines tile.
4. For conventional SQL Server VMs, select one of the versions labeled Free SQL
Server License... from the drop-down. For confidential VMs, choose the SQL Server
2019 Enterprise on Windows Server 2022 Database Engine Only image from the
drop-down.
5. Select Create.
Conventional VM
To deploy a conventional SQL Server on Azure VM, on the Basics tab, provide the
following information:
1. In the Project Details section, select your Azure subscription and then select
Create new to create a new resource group. Type SQLVM-RG for the name.
1. Under Security & Networking, select Public (Internet) for SQL Connectivity and
change the port to 1401 to avoid using a well-known port number in the public
scenario.
2. Under SQL Authentication, select Enable. The SQL login credentials are set to the
same user name and password that you configured for the VM. Use the default
setting for Azure Key Vault integration. Storage configuration is not available for
the basic SQL Server VM image, but you can find more information about available
options for other images at storage configuration.
3. Change any other settings if needed, and then select Review + create.
Create the SQL Server VM
On the Review + create tab, review the summary, and then select Create to create the
SQL Server VM, resource group, and the other resources specified for this VM.
You can monitor the deployment from the Azure portal. The Notifications button at the
top of the screen shows basic status of the deployment. Deployment can take several
minutes.
3. In the Connect to Server or Connect to Database Engine dialog box, edit the
Server name value. Enter your VM's public IP address. Then add a comma and add
the custom port (1401) that you specified when you configured the new VM. For
example, 11.22.33.444,1401 .
7. Select Connect.
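If you prefer to test connectivity from the command line instead of SSMS, the SqlServer PowerShell module offers an equivalent check. This is a sketch: the IP address is the example value from this quickstart, and the user name and password placeholders must match what you configured during provisioning:

```powershell
# Requires the SqlServer module (Install-Module SqlServer).
# Replace the example IP with your VM's public IP; 1401 is the custom port from this quickstart.
Invoke-Sqlcmd -ServerInstance '11.22.33.444,1401' `
    -Username 'azureuser' -Password '<your-password>' `
    -Query 'SELECT @@VERSION;'
```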
Log in to the VM remotely
Use the following steps to connect to the SQL Server virtual machine with Remote
Desktop:
1. After the Azure virtual machine is created and running, select Virtual machines, and
then choose your new VM.
2. Select Connect and then choose RDP from the drop-down to download your RDP
file.
3. Open the RDP file that your browser downloads for the VM.
4. The Remote Desktop Connection notifies you that the publisher of this remote
connection cannot be identified. Click Connect to continue.
5. In the Windows Security dialog, click Use a different account. You might have to
click More choices to see this. Specify the user name and password that you
configured when you created the VM. You must add a backslash before the user
name.
6. Click OK to connect.
After you connect to the SQL Server virtual machine, you can launch SQL Server
Management Studio and connect with Windows Authentication using your local
administrator credentials. If you enabled SQL Server Authentication, you can also
connect with SQL Authentication using the SQL login and password you configured
during provisioning.
Access to the machine enables you to directly change machine and SQL Server settings
based on your requirements. For example, you could configure the firewall settings or
change SQL Server configuration settings.
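For example, opening an additional SQL Server port in Windows Firewall from inside the VM might look like the following sketch. The port number matches the custom port chosen earlier in this quickstart; run it in an elevated session on the VM itself, not on your workstation:

```powershell
# Run inside the VM in an elevated PowerShell session.
# Opens inbound TCP 1401, the custom SQL connectivity port chosen during provisioning.
New-NetFirewallRule -DisplayName 'SQL Server (custom port)' `
    -Direction Inbound -Protocol TCP -LocalPort 1401 -Action Allow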
Clean up resources
If you do not need your SQL Server VM to run continuously, you can avoid unnecessary
charges by stopping it when not in use. You can also permanently delete all resources
associated with the virtual machine by deleting its resource group in the portal. Doing
so permanently deletes the virtual machine as well, so proceed with care. For more
information, see Manage Azure resources through portal.
Next steps
In this quickstart, you created a SQL Server virtual machine in the Azure portal. To learn
more about how to migrate your data to the new SQL Server, see the following article.
Applies to:
SQL Server on Azure VM
This quickstart steps through creating a SQL Server virtual machine (VM) with Azure
PowerShell.
Note
This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.
Configure PowerShell
1. Open PowerShell and establish access to your Azure account by running the
Connect-AzAccount command.
PowerShell
Connect-AzAccount
2. When you see the sign-in window, enter your credentials. Use the same email and
password that you use to sign in to the Azure portal.
PowerShell
$ResourceGroupName = "sqlvm1"
PowerShell
PowerShell
PowerShell
2. Create a network security group. Configure rules to allow remote desktop (RDP)
and SQL Server connections.
PowerShell
-SecurityRules $NsgRuleRDP,$NsgRuleSQL
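The command fragments above survive only partially; a self-contained sketch of the same step follows. The rule and resource names are illustrative, and $ResourceGroupName is the variable defined earlier in this quickstart:

```powershell
# Rule 1: allow RDP (TCP 3389) from any source.
$NsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name 'RDPRule' -Protocol Tcp `
    -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow

# Rule 2: allow SQL Server (TCP 1433) from any source.
$NsgRuleSQL = New-AzNetworkSecurityRuleConfig -Name 'MSSQLRule' -Protocol Tcp `
    -Direction Inbound -Priority 1001 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 1433 -Access Allow

# Create the network security group with both rules.
$Nsg = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroupName `
    -Location 'eastus' -Name 'SQLVMNSG' -SecurityRules $NsgRuleRDP,$NsgRuleSQL
```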
PowerShell
-NetworkSecurityGroupId $Nsg.Id
PowerShell
-AsPlainText -Force
2. Create a virtual machine configuration object and then create the VM. The
following command creates a SQL Server 2017 Developer Edition VM on Windows
Server 2016.
PowerShell
# Create the VM
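The listing above survives only in fragments; a condensed, hedged sketch of the same step follows. The image offer and SKU names correspond to the SQL Server 2017 Developer on Windows Server 2016 Marketplace image, and the sketch assumes a network interface object ($Interface) created in the preceding (omitted) networking steps:

```powershell
# Credential object for the VM administrator account.
$Cred = Get-Credential -Message 'Enter the VM administrator user name and password'

# Build the VM configuration, then create the VM.
$VMConfig = New-AzVMConfig -VMName 'SQLVM17' -VMSize 'Standard_DS13_v2' |
    Set-AzVMOperatingSystem -Windows -ComputerName 'SQLVM17' -Credential $Cred `
        -ProvisionVMAgent -EnableAutoUpdate |
    Set-AzVMSourceImage -PublisherName 'MicrosoftSQLServer' -Offer 'SQL2017-WS2016' `
        -Skus 'SQLDEV' -Version 'latest' |
    Add-AzVMNetworkInterface -Id $Interface.Id

New-AzVM -ResourceGroupName $ResourceGroupName -Location 'eastus' -VM $VMConfig
```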
PowerShell
mstsc /v:<publicIpAddress>
3. When prompted for credentials, choose to enter credentials for a different account.
Enter the username with a preceding backslash (for example, \azureadmin ), and
the password that you set previously in this quickstart.
2. In the Connect to Server dialog box, keep the defaults. The server name is the
name of the VM. Authentication is set to Windows Authentication. Select
Connect.
You're now connected to SQL Server locally. If you want to connect remotely, you must
configure connectivity from the Azure portal or manually.
Clean up resources
If you don't need the VM to run continuously, you can avoid unnecessary charges by
stopping it when not in use. The following command stops the VM but leaves it
available for future use.
PowerShell
You can also permanently delete all resources associated with the virtual machine with
the Remove-AzResourceGroup command. Doing so permanently deletes the virtual
machine as well, so use this command with care.
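Assuming the resource group variable defined earlier in this quickstart and a placeholder VM name, the two cleanup options can be sketched as:

```powershell
# Stop (deallocate) the VM to pause compute charges but keep it for later use.
Stop-AzVM -ResourceGroupName $ResourceGroupName -Name 'SQLVM17' -Force

# Or permanently delete the resource group and everything in it - irreversible.
Remove-AzResourceGroup -Name $ResourceGroupName -Force
```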
Next steps
In this quickstart, you created a SQL Server 2017 virtual machine using Azure PowerShell.
To learn more about how to migrate your data to the new SQL Server, see the following
article.
This quickstart shows you how to use Bicep to create a SQL Server on Azure Virtual
Machine (VM).
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure
resources. It provides concise syntax, reliable type safety, and support for code reuse.
Bicep offers the best authoring experience for your infrastructure-as-code solutions in
Azure.
Prerequisites
The SQL Server VM Bicep file requires the following:
Bicep
@allowed([
'sql2019-ws2019'
'sql2017-ws2019'
'sql2019-ws2022'
'SQL2016SP1-WS2016'
'SQL2016SP2-WS2016'
'SQL2014SP3-WS2012R2'
'SQL2014SP2-WS2012R2'
])
@allowed([
'standard-gen2'
'enterprise-gen2'
'SQLDEV-gen2'
'web-gen2'
'enterprisedbengineonly-gen2'
])
@secure()
@allowed([
'General'
'OLTP'
'DW'
])
@minValue(1)
@maxValue(8)
@description('Path for SQL Data files. Please choose drive letter from F to
Z, and other drives from A to E are reserved for system')
@minValue(1)
@maxValue(8)
@description('Path for SQL Log files. Please choose drive letter from F to Z
and different than the one used for SQL data. Drive letter from A to E are
reserved for system')
@allowed([
'Standard'
'TrustedLaunch'
])
var securityProfileJson = {
uefiSettings: {
secureBootEnabled: true
vTpmEnabled: true
securityType: securityType
var networkSecurityGroupRules = [
name: 'RDP'
properties: {
priority: 300
protocol: 'Tcp'
access: 'Allow'
direction: 'Inbound'
sourceAddressPrefix: '*'
sourcePortRange: '*'
destinationAddressPrefix: '*'
destinationPortRange: '3389'
var dataDisks = {
createOption: 'Empty'
caching: 'ReadOnly'
writeAcceleratorEnabled: false
storageAccountType: 'Premium_LRS'
diskSizeGB: 1023
name: publicIpAddressName
location: location
sku: {
name: publicIpAddressSku
properties: {
publicIPAllocationMethod: publicIpAddressType
name: networkSecurityGroupName
location: location
properties: {
securityRules: networkSecurityGroupRules
name: networkInterfaceName
location: location
properties: {
ipConfigurations: [
name: 'ipconfig1'
properties: {
subnet: {
id: subnetRef
privateIPAllocationMethod: 'Dynamic'
publicIPAddress: {
id: publicIpAddress.id
enableAcceleratedNetworking: true
networkSecurityGroup: {
id: nsgId
name: virtualMachineName
location: location
properties: {
hardwareProfile: {
vmSize: virtualMachineSize
storageProfile: {
createOption: dataDisks.createOption
writeAcceleratorEnabled: dataDisks.writeAcceleratorEnabled
diskSizeGB: dataDisks.diskSizeGB
managedDisk: {
storageAccountType: dataDisks.storageAccountType
}]
osDisk: {
createOption: 'FromImage'
managedDisk: {
storageAccountType: 'Premium_LRS'
imageReference: {
publisher: 'MicrosoftSQLServer'
offer: imageOffer
sku: sqlSku
version: 'latest'
networkProfile: {
networkInterfaces: [
id: networkInterface.id
osProfile: {
computerName: virtualMachineName
adminUsername: adminUsername
adminPassword: adminPassword
windowsConfiguration: {
enableAutomaticUpdates: true
provisionVMAgent: true
resource virtualMachineName_extension
'Microsoft.Compute/virtualMachines/extensions@2022-03-01' = if
((securityType == 'TrustedLaunch') &&
((securityProfileJson.uefiSettings.secureBootEnabled == true) &&
(securityProfileJson.uefiSettings.vTpmEnabled == true))) {
parent: virtualMachine
name: extensionName
location: location
properties: {
publisher: extensionPublisher
type: extensionName
typeHandlerVersion: extensionVersion
autoUpgradeMinorVersion: true
enableAutomaticUpgrade: true
settings: {
AttestationConfig: {
MaaSettings: {
maaEndpoint: ''
maaTenantName: maaTenantName
AscSettings: {
ascReportingEndpoint: ''
ascReportingFrequency: ''
useCustomToken: 'false'
disableAlerts: 'false'
resource Microsoft_SqlVirtualMachine_sqlVirtualMachines_virtualMachine
'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2022-07-01-preview' = {
name: virtualMachineName
location: location
properties: {
virtualMachineResourceId: virtualMachine.id
sqlManagement: 'Full'
sqlServerLicenseType: 'PAYG'
storageConfigurationSettings: {
diskConfigurationType: diskConfigurationType
storageWorkloadType: storageWorkloadType
sqlDataSettings: {
luns: dataDisksLuns
defaultFilePath: dataPath
}
sqlLogSettings: {
luns: logDisksLuns
defaultFilePath: logPath
sqlTempDbSettings: {
defaultFilePath: tempDbPath
2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
CLI
Azure CLI
Make sure to replace the resource group name, exampleRG, with the name of your pre-
configured resource group.
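The same deployment can be done with Azure PowerShell. A sketch, assuming the template is saved locally as main.bicep and the exampleRG resource group named above already exists:

```powershell
# Deploy the Bicep file to an existing resource group.
# PowerShell prompts for any template parameters without default values (for example, adminPassword).
New-AzResourceGroupDeployment -ResourceGroupName 'exampleRG' -TemplateFile './main.bicep'
```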
Note
When the deployment finishes, you should see a message indicating the
deployment succeeded.
CLI
Azure CLI
Clean up resources
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete
the resource group and its resources.
CLI
Azure CLI
Next steps
For a step-by-step tutorial that guides you through the process of creating a Bicep file
with Visual Studio Code, see:
Azure portal
PowerShell
Use this Azure Resource Manager template (ARM template) to deploy a SQL Server on
Azure Virtual Machine (VM).
An ARM template is a JavaScript Object Notation (JSON) file that defines the
infrastructure and configuration for your project. The template uses declarative syntax.
In declarative syntax, you describe your intended deployment without writing the
sequence of programming commands to create the deployment.
If your environment meets the prerequisites and you're familiar with using ARM
templates, select the Deploy to Azure button. The template will open in the Azure
portal.
Prerequisites
The SQL Server VM ARM template requires the following:
JSON
"$schema": "https://schema.management.azure.com/schemas/2019-04-
01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"_generator": {
"name": "bicep",
"version": "0.17.1.54307",
"templateHash": "3407567292495018002"
},
"parameters": {
"virtualMachineName": {
"type": "string",
"defaultValue": "myVM",
"metadata": {
},
"virtualMachineSize": {
"type": "string",
"defaultValue": "Standard_D8s_v3",
"metadata": {
},
"existingVirtualNetworkName": {
"type": "string",
"metadata": {
},
"existingVnetResourceGroup": {
"type": "string",
"defaultValue": "[resourceGroup().name]",
"metadata": {
},
"existingSubnetName": {
"type": "string",
"metadata": {
},
"imageOffer": {
"type": "string",
"defaultValue": "sql2019-ws2022",
"allowedValues": [
"sql2019-ws2019",
"sql2017-ws2019",
"sql2019-ws2022",
"SQL2016SP1-WS2016",
"SQL2016SP2-WS2016",
"SQL2014SP3-WS2012R2",
"SQL2014SP2-WS2012R2"
],
"metadata": {
},
"sqlSku": {
"type": "string",
"defaultValue": "standard-gen2",
"allowedValues": [
"standard-gen2",
"enterprise-gen2",
"SQLDEV-gen2",
"web-gen2",
"enterprisedbengineonly-gen2"
],
"metadata": {
},
"adminUsername": {
"type": "string",
"metadata": {
},
"adminPassword": {
"type": "securestring",
"metadata": {
},
"storageWorkloadType": {
"type": "string",
"defaultValue": "General",
"allowedValues": [
"General",
"OLTP",
"DW"
],
"metadata": {
},
"sqlDataDisksCount": {
"type": "int",
"defaultValue": 1,
"maxValue": 8,
"minValue": 1,
"metadata": {
"description": "Amount of data disks (1TB each) for SQL Data files"
},
"dataPath": {
"type": "string",
"defaultValue": "F:\\SQLData",
"metadata": {
"description": "Path for SQL Data files. Please choose drive letter
from F to Z, and other drives from A to E are reserved for system"
},
"sqlLogDisksCount": {
"type": "int",
"defaultValue": 1,
"maxValue": 8,
"minValue": 1,
"metadata": {
"description": "Amount of data disks (1TB each) for SQL Log files"
},
"logPath": {
"type": "string",
"defaultValue": "G:\\SQLLog",
"metadata": {
"description": "Path for SQL Log files. Please choose drive letter
from F to Z and different than the one used for SQL data. Drive letter from
A to E are reserved for system"
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
},
"secureBoot": {
"type": "bool",
"defaultValue": true,
"metadata": {
},
"vTPM": {
"type": "bool",
"defaultValue": true,
"metadata": {
},
"variables": {
"networkInterfaceName": "[format('{0}-nic',
parameters('virtualMachineName'))]",
"networkSecurityGroupName": "[format('{0}-nsg',
parameters('virtualMachineName'))]",
"networkSecurityGroupRules": [
"name": "RDP",
"properties": {
"priority": 300,
"protocol": "Tcp",
"access": "Allow",
"direction": "Inbound",
"sourceAddressPrefix": "*",
"sourcePortRange": "*",
"destinationAddressPrefix": "*",
"destinationPortRange": "3389"
],
"publicIpAddressName": "[format('{0}-publicip-{1}',
parameters('virtualMachineName'),
uniqueString(parameters('virtualMachineName')))]",
"publicIpAddressType": "Dynamic",
"publicIpAddressSku": "Basic",
"diskConfigurationType": "NEW",
"nsgId": "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]",
"subnetRef": "[resourceId(parameters('existingVnetResourceGroup'),
'Microsoft.Network/virtualNetWorks/subnets',
parameters('existingVirtualNetworkName'),
parameters('existingSubnetName'))]",
"logDisksLuns": "[range(parameters('sqlDataDisksCount'),
parameters('sqlLogDisksCount'))]",
"dataDisks": {
"createOption": "Empty",
"caching": "ReadOnly",
"writeAcceleratorEnabled": false,
"storageAccountType": "Premium_LRS",
"diskSizeGB": 1023
},
"tempDbPath": "D:\\SQLTemp",
"extensionName": "GuestAttestation",
"extensionPublisher": "Microsoft.Azure.Security.WindowsAttestation",
"extensionVersion": "1.0",
"maaTenantName": "GuestAttestation"
},
"resources": [
"type": "Microsoft.Network/publicIPAddresses",
"apiVersion": "2022-01-01",
"name": "[variables('publicIpAddressName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[variables('publicIpAddressSku')]"
},
"properties": {
"publicIPAllocationMethod": "[variables('publicIpAddressType')]"
},
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2022-01-01",
"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": "[variables('networkSecurityGroupRules')]"
},
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2022-01-01",
"name": "[variables('networkInterfaceName')]",
"location": "[parameters('location')]",
"properties": {
"ipConfigurations": [
"name": "ipconfig1",
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses',
variables('publicIpAddressName'))]"
],
"enableAcceleratedNetworking": true,
"networkSecurityGroup": {
"id": "[variables('nsgId')]"
},
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]",
"[resourceId('Microsoft.Network/publicIPAddresses',
variables('publicIpAddressName'))]"
},
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2022-03-01",
"name": "[parameters('virtualMachineName')]",
"location": "[parameters('location')]",
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('virtualMachineSize')]"
},
"storageProfile": {
"copy": [
"name": "dataDisks",
"input": {
"createOption": "[variables('dataDisks').createOption]",
"caching": "[if(greaterOrEquals(range(0,
add(parameters('sqlDataDisksCount'), parameters('sqlLogDisksCount')))
[range(0, length(range(0, add(parameters('sqlDataDisksCount'),
parameters('sqlLogDisksCount')))))[copyIndex('dataDisks')]],
parameters('sqlDataDisksCount')), 'None', variables('dataDisks').caching)]",
"writeAcceleratorEnabled": "
[variables('dataDisks').writeAcceleratorEnabled]",
"diskSizeGB": "[variables('dataDisks').diskSizeGB]",
"managedDisk": {
"storageAccountType": "
[variables('dataDisks').storageAccountType]"
],
"osDisk": {
"createOption": "FromImage",
"managedDisk": {
"storageAccountType": "Premium_LRS"
},
"imageReference": {
"publisher": "MicrosoftSQLServer",
"offer": "[parameters('imageOffer')]",
"sku": "[parameters('sqlSku')]",
"version": "latest"
},
"networkProfile": {
"networkInterfaces": [
"id": "[resourceId('Microsoft.Network/networkInterfaces',
variables('networkInterfaceName'))]"
},
"osProfile": {
"computerName": "[parameters('virtualMachineName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]",
"windowsConfiguration": {
"enableAutomaticUpdates": true,
"provisionVMAgent": true
},
"securityProfile": {
"uefiSettings": {
"secureBootEnabled": "[parameters('secureBoot')]",
"vTpmEnabled": "[parameters('vTPM')]"
},
"securityType": "TrustedLaunch"
},
"dependsOn": [
"[resourceId('Microsoft.Network/networkInterfaces',
variables('networkInterfaceName'))]"
},
"type": "Microsoft.Compute/virtualMachines/extensions",
"apiVersion": "2022-03-01",
"location": "[parameters('location')]",
"properties": {
"publisher": "[variables('extensionPublisher')]",
"type": "[variables('extensionName')]",
"typeHandlerVersion": "[variables('extensionVersion')]",
"autoUpgradeMinorVersion": true,
"enableAutomaticUpgrade": true,
"settings": {
"AttestationConfig": {
"MaaSettings": {
"maaEndpoint": "",
"maaTenantName": "[variables('maaTenantName')]"
},
"AscSettings": {
"ascReportingEndpoint": "",
"ascReportingFrequency": ""
},
"useCustomToken": "false",
"disableAlerts": "false"
},
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines',
parameters('virtualMachineName'))]"
},
"type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
"apiVersion": "2022-07-01-preview",
"name": "[parameters('virtualMachineName')]",
"location": "[parameters('location')]",
"properties": {
"virtualMachineResourceId": "
[resourceId('Microsoft.Compute/virtualMachines',
parameters('virtualMachineName'))]",
"sqlManagement": "Full",
"sqlServerLicenseType": "PAYG",
"storageConfigurationSettings": {
"diskConfigurationType": "[variables('diskConfigurationType')]",
"storageWorkloadType": "[parameters('storageWorkloadType')]",
"sqlDataSettings": {
"luns": "[variables('dataDisksLuns')]",
"defaultFilePath": "[parameters('dataPath')]"
},
"sqlLogSettings": {
"luns": "[variables('logDisksLuns')]",
"defaultFilePath": "[parameters('logPath')]"
},
"sqlTempDbSettings": {
"defaultFilePath": "[variables('tempDbPath')]"
},
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines',
parameters('virtualMachineName'))]"
],
"outputs": {
"adminUsername": {
"type": "string",
"value": "[parameters('adminUsername')]"
More SQL Server on Azure VM templates can be found in the quickstart template
gallery.
3. Select Review + create. After the SQL Server VM has been deployed successfully,
you get a notification.
The Azure portal is used to deploy the template. In addition to the Azure portal, you can
also use Azure PowerShell, the Azure CLI, and REST API. To learn other deployment
methods, see Deploy templates.
Azure CLI
echo "Enter the resource group where your SQL Server VM exists:" &&
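The truncated command above comes from an interactive review step. An equivalent, self-contained PowerShell sketch that lists what the template deployed (the prompt text mirrors the CLI fragment):

```powershell
# Prompt for the resource group that holds the SQL Server VM, then list its resources.
$resourceGroupName = Read-Host -Prompt 'Enter the resource group where your SQL Server VM exists'
Get-AzResource -ResourceGroupName $resourceGroupName | Format-Table Name, ResourceType
```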
Clean up resources
When no longer needed, delete the resource group by using Azure CLI or Azure
PowerShell:
CLI
Azure CLI
Next steps
For a step-by-step tutorial that guides you through the process of creating a template,
see:
Azure portal
PowerShell
Applies to:
SQL Server on Azure VM
Business continuity means continuing your business in the event of a disaster, planning
for recovery, and ensuring that your data is highly available. SQL Server on Azure Virtual
Machines can help lower the cost of a high-availability and disaster recovery (HADR)
database solution.
Most SQL Server HADR solutions are supported on virtual machines (VMs), as both
Azure-only and hybrid solutions. In an Azure-only solution, the entire HADR system runs
in Azure. In a hybrid configuration, part of the solution runs in Azure and the other part
runs on-premises in your organization. The flexibility of the Azure environment enables
you to move partially or completely to Azure to satisfy the budget and HADR
requirements of your SQL Server database systems.
This article compares and contrasts the business continuity solutions available for SQL
Server on Azure VMs.
Overview
It's up to you to ensure that your database system has the HADR capabilities that the
service-level agreement (SLA) requires. The fact that Azure provides high-availability
mechanisms, such as service healing for cloud services and failure recovery detection for
virtual machines, does not itself guarantee that you can meet the SLA. Although these
mechanisms help protect the high availability of the virtual machine, they don't protect
the availability of SQL Server running inside the VM.
It's possible for the SQL Server instance to fail while the VM is online and healthy. Even
the high-availability mechanisms provided by Azure allow for downtime of the VMs due
to events like recovery from software or hardware failures and operating system
upgrades.
It's now possible to lift and shift both your failover cluster instance and availability
group solution to SQL Server on Azure VMs using Azure Migrate.
Deployment architectures
Azure supports these SQL Server technologies for business continuity:
You can combine the technologies to implement a SQL Server solution that has both
high-availability and disaster recovery capabilities. Depending on the technology that
you use, a hybrid deployment might require a VPN tunnel with the Azure virtual
network. The following sections show you some example deployment architectures.
Availability groups: Availability replicas running in Azure VMs in the same region
provide high availability. You need to configure a domain controller VM, because
Windows failover clustering requires an Active Directory domain.
For higher redundancy and availability, the Azure VMs can be deployed in different
availability zones as documented in the availability group overview.
Failover cluster instances: Failover cluster instances (FCIs) are supported on SQL
Server VMs. Because the FCI feature requires shared storage, the following solutions
work with SQL Server on Azure VMs:
- Using Azure shared disks for Windows Server 2019. Shared managed disks are an
Azure product that allows attaching a managed disk to multiple virtual machines
simultaneously. VMs in the cluster can read or write to your attached disk based on
the reservation chosen by the clustered application through SCSI Persistent
Reservations (SCSI PR). SCSI PR is an industry-standard storage solution that's used
by applications running on a storage area network (SAN) on-premises. Enabling
SCSI PR on a managed disk allows you to migrate these applications to Azure as is.
- Using Storage Spaces Direct (S2D) to provide a software-based virtual SAN for
Windows Server 2016 and later.
- Using a Premium file share for Windows Server 2012 and later. Premium file
shares are SSD backed, have consistently low latency, and are fully supported for
use with FCI.
- Using shared block storage for a remote iSCSI target via Azure ExpressRoute. For
example, NetApp Private Storage (NPS) exposes an iSCSI target via ExpressRoute
with Equinix to Azure VMs.
For shared storage and data replication solutions from Microsoft partners, contact
the vendor for any issues related to accessing data on failover.
Availability groups: Availability replicas running across multiple datacenters in Azure
VMs for disaster recovery. This cross-region solution helps protect against a complete
site outage.
Within a region, all replicas should be within the same cloud service and the same
virtual network. Because each region will have a separate virtual network, these
solutions require network-to-network connectivity. For more information, see
Configure a network-to-network connection by using the Azure portal. For detailed
instructions, see Configure a SQL Server Always On availability group across
different Azure regions.
Database mirroring: Principal and mirror servers running in different datacenters for
disaster recovery. You must deploy them by using server certificates. SQL Server
database mirroring is not supported for SQL Server 2008 or SQL Server 2008 R2 on an
Azure VM.
Backup and restore with Azure Blob storage: Production databases backed up directly
to Blob storage in a different datacenter for disaster recovery.
For more information, see Backup and restore for SQL Server on Azure VMs.
Replicate and fail over SQL Server to Azure with Azure Site Recovery: Production SQL
Server instance in one Azure datacenter replicated directly to Azure Storage in a
different Azure datacenter for disaster recovery.
For more information, see Protect SQL Server using SQL Server disaster recovery
and Azure Site Recovery.
Availability groups: Some availability replicas running in Azure VMs and other replicas
running on-premises for cross-site disaster recovery. The production site can be either
on-premises or in an Azure datacenter.
Because all availability replicas must be in the same failover cluster, the cluster must
span both networks (a multi-subnet failover cluster). This configuration requires a VPN
connection between Azure and the on-premises network.
For successful disaster recovery of your databases, you should also install a replica
domain controller at the disaster recovery site. To get started, review the availability
group tutorial.
Technology Example Architectures
Database mirroring: One partner running in an Azure VM and the other running
on-premises for cross-site disaster recovery by using server certificates. Partners don't
need to be in the same Active Directory domain, and no VPN connection is required.
For successful disaster recovery of your databases, you should also install a replica
domain controller at the disaster recovery site. SQL Server database mirroring is not
supported for SQL Server 2008 or SQL Server 2008 R2 on an Azure VM.
Log shipping: One server running in an Azure VM and the other running on-premises
for cross-site disaster recovery. Log shipping depends on Windows file sharing, so a
VPN connection between the Azure virtual network and the on-premises network is
required.
For successful disaster recovery of your databases, you should also install a replica
domain controller at the disaster recovery site.
Backup and restore with Azure Blob storage: On-premises production databases backed
up directly to Azure Blob storage for disaster recovery.
For more information, see Backup and restore for SQL Server on Azure Virtual
Machines.
Replicate and fail over SQL Server to Azure with Azure Site Recovery: On-premises
production SQL Server instance replicated directly to Azure Storage for disaster
recovery.
For more information, see Protect SQL Server using SQL Server disaster
recovery and Azure Site Recovery.
For example, you can have two free passive secondaries when all three replicas are
hosted in Azure:
Or you can configure a hybrid failover environment, with a licensed primary on-
premises, one free passive for HA, one free passive for DR on-premises, and one free
passive for DR in Azure:
For more information, see the product licensing terms.
To enable this benefit, go to your SQL Server virtual machine resource. Select Configure
under Settings, and then choose the HA/DR option under SQL Server License. Select
the check box to confirm that this SQL Server VM will be used as a passive replica, and
then select Apply to save your settings. Note that when all three replicas are hosted in
Azure, pay-as-you-go customers are also entitled to use the HA/DR license type.
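The same license setting can be scripted with the Az.SqlVirtualMachine module. A sketch, where the resource names are placeholders and 'DR' is the license-type value corresponding to the free passive replica benefit:

```powershell
# Mark an existing SQL Server VM resource as a free passive DR replica.
Update-AzSqlVM -ResourceGroupName 'SQLVM-RG' -Name 'SQLVM-DR' -LicenseType 'DR'
```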
To configure a high-availability setup, place all participating SQL Server virtual machines
in the same availability set to avoid application or data loss during a maintenance event.
Only nodes in the same cloud service can participate in the same availability set. For
more information, see Manage the availability of virtual machines.
To configure high availability, place participating SQL Server virtual machines spread
across availability zones in the region. There will be additional charges for network-to-
network transfers between availability zones. For more information, see Availability
zones.
Geo-replication support
Geo-replication in Azure disks does not support storing the data file and log file of the same database on separate disks. GRS replicates changes on each disk independently and asynchronously. This mechanism guarantees the write order within a
single disk on the geo-replicated copy, but not across geo-replicated copies of multiple
disks. If you configure a database to store its data file and its log file on separate disks,
the recovered disks after a disaster might contain a more up-to-date copy of the data
file than the log file, which breaks the write-ahead log in SQL Server and the ACID
properties (atomicity, consistency, isolation, and durability) of transactions.
If you don't have the option to disable geo-replication on the storage account, keep all
data and log files for a database on the same disk. If you must use more than one disk
due to the size of the database, deploy one of the disaster recovery solutions listed
earlier to ensure data redundancy.
Next steps
Decide if an availability group or a failover cluster instance is the best business
continuity solution for your business. Then review the best practices for configuring your
environment for high availability and disaster recovery.
Backup and restore for SQL Server on
Azure VMs
Article • 06/27/2023
Applies to:
SQL Server on Azure VM
This article provides guidance on the backup and restore options available for SQL
Server running on a Windows virtual machine (VM) in Azure. Azure Storage maintains
three copies of every Azure VM disk to guarantee protection against data loss or
physical data corruption. Thus, unlike SQL Server on-premises, you don't need to focus
on hardware failures. However, you should still back up your SQL Server databases to
protect against application or user errors, such as inadvertent data insertions or
deletions. In this situation, it is important to be able to restore to a specific point in time.
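As a sketch of what a point-in-time restore looks like in practice, the following T-SQL restores a full backup and then replays the transaction log up to a specific moment. The database name, file paths, and the STOPAT time are placeholders, not values from this article:

```sql
-- Hypothetical example: restore a database to a point in time just before
-- an accidental deletion. Names, paths, and the time are placeholders.
RESTORE DATABASE SalesDb
    FROM DISK = N'F:\Backups\SalesDb_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG SalesDb
    FROM DISK = N'F:\Backups\SalesDb_log1.trn'
    WITH NORECOVERY,
    STOPAT = N'2023-06-27T10:15:00';  -- the point in time to recover to

-- Bring the database online once the target point in time is reached.
RESTORE DATABASE SalesDb WITH RECOVERY;
```

Each log backup that covers the interval up to the STOPAT time is restored WITH NORECOVERY; only the final RESTORE ... WITH RECOVERY brings the database online.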
The first part of this article provides an overview of the available backup and restore
options. This is followed by sections that provide more information on each strategy.
Automated Backup (SQL Server 2014 and later): Automated Backup allows you to schedule regular backups for all databases on a SQL Server VM. Backups are stored in Azure storage for up to 30 days. Beginning with SQL Server 2016, Automated Backup offers additional options such as configuring manual scheduling and the frequency of full and log backups.

Azure Backup for SQL VMs (SQL Server 2008 and later): Azure Backup provides an enterprise-class backup capability for SQL Server on Azure VMs. With this service, you can centrally manage backups for multiple servers and thousands of databases. Databases can be restored to a specific point in time in the portal. It offers a customizable retention policy that can maintain backups for years.

Manual backup (all versions): Depending on your version of SQL Server, there are various techniques to manually back up and restore SQL Server on an Azure VM. In this scenario, you are responsible for how your databases are backed up and for the storage location and management of those backups.
The following sections describe each option in more detail. The final section of this
article provides a summary in the form of a feature matrix.
Automated Backup
Automated Backup provides an automatic backup service for SQL Server Standard and
Enterprise editions running on a Windows VM in Azure. This service is provided by the
SQL Server IaaS Agent Extension, which is automatically installed on SQL Server
Windows virtual machine images in the Azure portal.
All databases are backed up to an Azure storage account that you configure. Backups
can be encrypted and retained for up to 90 days.
SQL Server 2016 and higher VMs offer more customization options with Automated
Backup. These improvements include:
To restore a database, you must locate the required backup file(s) in the storage account
and perform a restore on your SQL VM using SQL Server Management Studio (SSMS) or
Transact-SQL commands.
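A minimal sketch of such a restore directly from the storage account, assuming a SQL Server credential for the container already exists on the instance (the storage account, container, and database names are placeholders):

```sql
-- Hypothetical example: restore a database directly from a backup file
-- stored in Azure Blob storage. All names and URLs are placeholders.
RESTORE DATABASE SalesDb
    FROM URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
    WITH RECOVERY;
```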
For more information on how to configure Automated Backup for SQL VMs, see one of
the following articles:
SQL Server 2016 and later: Automated Backup for Azure Virtual Machines
SQL Server 2014: Automated Backup for SQL Server 2014 Virtual Machines
This Azure Backup solution for SQL VMs is generally available. For more information, see
Back up SQL Server database to Azure.
Manual backup
If you want to manually manage backup and restore operations on your SQL VMs, there
are several options depending on the version of SQL Server you are using. For an
overview of backup and restore, see one of the following articles based on your version
of SQL Server:
The following sections describe several manual backup and restore options in more
detail.
Backup to URL
Beginning with SQL Server 2012 SP1 CU2, you can back up and restore directly to
Microsoft Azure Blob storage, which is also known as backup to URL. SQL Server 2016
also introduced the following enhancements for this feature:
Striping: When backing up to Microsoft Azure Blob Storage, SQL Server 2016 supports backing up to multiple blobs to enable backing up large databases, up to a maximum of 12.8 TB.

Snapshot Backup: Through the use of Azure snapshots, SQL Server File-Snapshot Backup provides nearly instantaneous backups and rapid restores for database files stored in Azure Blob Storage. This capability enables you to simplify your backup and restore policies. File-snapshot backup also supports point-in-time restore. For more information, see Snapshot Backups for Database Files in Azure.
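A striped backup spreads one backup across several blobs by listing multiple URLs in a single BACKUP statement. The following is a hedged sketch; the storage account, container, and database names are placeholders, and a SAS-based credential matching the container URL is assumed to exist:

```sql
-- Hypothetical example: stripe a large database backup across multiple
-- block blobs (SQL Server 2016 and later). URLs are placeholders.
BACKUP DATABASE SalesDb
TO  URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDb_1.bak',
    URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDb_2.bak',
    URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDb_3.bak'
WITH COMPRESSION, STATS = 10;
```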
For more information, see one of the following articles based on your version of SQL Server:
Managed Backup
Beginning with SQL Server 2014, Managed Backup automates the creation of backups to
Azure storage. Behind the scenes, Managed Backup makes use of the Backup to URL
feature described in the previous section of this article. Managed Backup is also the
underlying feature that supports the SQL Server VM Automated Backup service.
Beginning in SQL Server 2016, Managed Backup gained additional options for scheduling, system database backup, and full and log backup frequency.
For more information, see one of the following articles based on your version of SQL
Server:
Managed Backup to Microsoft Azure for SQL Server 2016 and later
Managed Backup to Microsoft Azure for SQL Server 2014
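As a sketch of enabling Managed Backup for a single database with the SQL Server 2016 syntax (the container URL and database name are placeholders, and a credential for the container is assumed to be configured):

```sql
-- Hypothetical example: enable Managed Backup to Microsoft Azure for one
-- database with a 30-day retention period. Names and URLs are placeholders.
USE msdb;
EXEC managed_backup.sp_backup_config_basic
    @enable_backup = 1,
    @database_name = N'SalesDb',
    @container_url = N'https://mystorageaccount.blob.core.windows.net/backups',
    @retention_days = 30;
```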
Decision matrix
The following table summarizes the capabilities of each backup and restore option for
SQL Server virtual machines in Azure.
Although backup and restore can be used to migrate your data, there are potentially
easier data migration paths to SQL Server on VM. For a full discussion of migration
options and recommendations, see Migration guide: SQL Server to SQL Server on Azure
Virtual Machines.
Use Azure Storage for SQL Server
backup and restore
Article • 03/01/2023
Applies to:
SQL Server on Azure VM
Starting with SQL Server 2012 SP1 CU2, you can back up SQL Server databases directly to Azure Blob storage. Use this functionality to back up to and restore from Azure Blob storage. Backing up to the cloud offers the benefits of availability, limitless geo-replicated off-site storage, and easy migration of data to and from the cloud.
You can issue BACKUP or RESTORE statements by using Transact-SQL or SMO.
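A minimal backup-to-URL statement looks like the following sketch. The storage account, container, and database names are placeholders, and a credential for the container is assumed to exist (creating one is covered later in this article):

```sql
-- Hypothetical example: back up a database directly to Azure Blob storage.
-- WITH FORMAT overwrites an existing blob of the same name.
BACKUP DATABASE SalesDb
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
WITH COMPRESSION, FORMAT;
```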
Overview
SQL Server 2016 introduces new capabilities; you can use file-snapshot backup to
perform nearly instantaneous backups and incredibly quick restores.
This topic explains why you might choose to use Azure Storage for SQL Server backups
and then describes the components involved. You can use the resources provided at the
end of the article to access walk-throughs and additional information to start using this
service with your SQL Server backups.
Ease of use: Storing your backups in Azure blobs can be a convenient, flexible, and easy-to-access off-site option. Creating off-site storage for your SQL Server
backups can be as easy as modifying your existing scripts/jobs to use the BACKUP
TO URL syntax. Off-site storage should typically be far enough from the
production database location to prevent a single disaster that might impact both
the off-site and production database locations. By choosing to geo-replicate your
Azure blobs, you have an extra layer of protection in the event of a disaster that
could affect the whole region.
Backup archive: Azure Blob storage offers a better alternative to the often-used tape option for archiving backups. Tape storage might require physical transportation to an off-site facility and measures to protect the media. Storing your backups in Azure Blob storage provides an instant, highly available, and durable archiving option.
Managed hardware: There is no overhead of hardware management with Azure
services. Azure services manage the hardware and provide geo-replication for
redundancy and protection against hardware failures.
Unlimited storage: By enabling a direct backup to Azure blobs, you have access to
virtually unlimited storage. Alternatively, backing up to an Azure virtual machine
disk has limits based on machine size. There is a limit to the number of disks you
can attach to an Azure virtual machine for backups. This limit is 16 disks for an
extra large instance and fewer for smaller instances.
Backup availability: Backups stored in Azure blobs are available from anywhere
and at any time and can easily be accessed for restores to a SQL Server instance,
without the need for database attach/detach or downloading and attaching the
VHD.
Cost: You pay only for the service that is used, which can make it cost-effective as an off-site and backup-archive option. See the Azure pricing calculator and the Azure Pricing article for more information.
Storage snapshots: When database files are stored in an Azure blob and you are
using SQL Server 2016, you can use file-snapshot backup to perform nearly
instantaneous backups and incredibly quick restores.
For more details, see SQL Server Backup and Restore with Azure Blob storage.
The following two sections introduce Azure Blob storage, including the required SQL
Server components. It is important to understand the components and their interaction
to successfully use backup and restore from Azure Blob storage.
Component Description
Storage The storage account is the starting point for all storage services. To access Azure
account Blob storage, first create an Azure Storage account. SQL Server is agnostic to the
type of storage redundancy used. Backup to Page blobs and block blobs is
supported for every storage redundancy (LRS\ZRS\GRS\RA-GRS\RA-GZRS\etc.).
For more information about Azure Blob storage, see How to use Azure Blob
storage.
Component Description
Container A container provides a grouping of a set of blobs, and can store an unlimited
number of Blobs. To write a SQL Server backup to Azure Blob storage, you must
have at least the root container created.
Blob A file of any type and size. Blobs are addressable using the following URL format:
https://<storageaccount>.blob.core.windows.net/<container>/<blob> . For more
information about page Blobs, see Understanding Block and Page Blobs
Component Description
URL A URL specifies a Uniform Resource Identifier (URI) to a unique backup file. The
URL provides the location and name of the SQL Server backup file. The URL must
point to an actual blob, not just a container. If the blob does not exist, Azure
creates it. If an existing blob is specified, the backup command fails, unless the
WITH FORMAT option is specified. The following is an example of the URL you would
specify in the BACKUP command:
https://<storageaccount>.blob.core.windows.net/<container>/<FILENAME.bak> .
Credential The information that is required to connect and authenticate to Azure Blob storage
is stored as a credential. In order for SQL Server to write backups to an Azure Blob
or restore from it, a SQL Server credential must be created. For more information,
see SQL Server Credential.
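As a hedged sketch, a shared access signature (SAS) credential for backup to URL is created by naming the credential after the container URL. The storage account, container, and SAS token below are placeholders:

```sql
-- Hypothetical example: create a SQL Server credential for backup to URL
-- using a shared access signature. The credential name must match the
-- container URL exactly; the SECRET is the SAS token (placeholder here,
-- supplied without a leading '?').
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2022-11-02&ss=b&srt=co&sp=rwdlac&se=...';  -- SAS token placeholder
```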
Note
SQL Server 2016 has been updated to support block blobs. Please see Tutorial: Use
Microsoft Azure Blob Storage with SQL Server databases for more details.
Next steps
1. Create an Azure account if you don't already have one. If you are evaluating Azure,
consider the free trial.
2. Then go through one of the following tutorials that walk you through creating a
storage account and performing a restore.
SQL Server 2014: Tutorial: SQL Server 2014 Backup and Restore to Microsoft
Azure Blob storage.
SQL Server 2016: Tutorial: Using the Microsoft Azure Blob Storage with SQL
Server databases
3. Review additional documentation starting with SQL Server Backup and Restore
with Microsoft Azure Blob storage.
If you have any problems, review the topic SQL Server Backup to URL Best Practices and
Troubleshooting.
For other SQL Server backup and restore options, see Backup and Restore for SQL
Server on Azure Virtual Machines.
Always On availability group on SQL
Server on Azure VMs
Article • 03/30/2023
Applies to:
SQL Server on Azure VM
This article introduces Always On availability groups (AG) for SQL Server on Azure Virtual
Machines (VMs).
Overview
Always On availability groups on Azure Virtual Machines are similar to Always On
availability groups on-premises, and rely on the underlying Windows Server Failover
Cluster. However, since the virtual machines are hosted in Azure, there are a few
additional considerations as well, such as VM redundancy, and routing traffic on the
Azure network.
The following diagram illustrates an availability group for SQL Server on Azure VMs:
Note
It's now possible to lift and shift your availability group solution to SQL Server on
Azure VMs using Azure Migrate. See Migrate availability group to learn more.
VM redundancy
To increase redundancy and high availability, SQL Server VMs should either be in the
same availability set, or different availability zones.
Placing a set of VMs in the same availability set protects from outages within a data
center caused by equipment failure (VMs within an Availability Set don't share
resources) or from updates (VMs within an availability set aren't updated at the same
time).
Availability Zones protect against the failure of an entire data center, with each Zone
representing a set of data centers within a region. By ensuring resources are placed in
different Availability Zones, no data center-level outage can take all of your VMs offline.
When creating Azure VMs, you must choose between configuring Availability Sets vs
Availability Zones. An Azure VM can't participate in both.
While Availability Zones may provide better availability than Availability Sets (99.99% vs
99.95%), performance should also be a consideration. VMs within an Availability Set can
be placed in a proximity placement group which guarantees they're close to each other,
minimizing network latency between them. VMs located in different Availability Zones
have greater network latency between them, which can increase the time it takes to
synchronize data between the primary and secondary replica(s). This may cause delays
on the primary replica as well as increase the chance of data loss in the event of an
unplanned failover. It's important to test the proposed solution under load and ensure
that it meets SLAs for both performance and availability.
Connectivity
To match the on-premises experience for connecting to your availability group listener,
deploy your SQL Server VMs to multiple subnets within the same virtual network.
Having multiple subnets negates the need for the extra dependency on an Azure Load
Balancer, or a distributed network name (DNN) to route your traffic to your listener.
If you deploy your SQL Server VMs to a single subnet, you can configure a virtual
network name (VNN) and an Azure Load Balancer, or a distributed network name (DNN)
to route traffic to your availability group listener. Review the differences between the
two and then deploy either a distributed network name (DNN) or a virtual network
name (VNN) for your availability group.
Most SQL Server features work transparently with availability groups when using the
DNN, but there are certain features that may require special consideration. See AG and
DNN interoperability to learn more.
Additionally, there are some behavior differences between the functionality of the VNN
listener and DNN listener that are important to note:
Failover time: Failover time is faster when using a DNN listener since there's no
need to wait for the network load balancer to detect the failure event and change
its routing.
Existing connections: Connections made to a specific database within a failing-over
availability group will close, but other connections to the primary replica will
remain open since the DNN stays online during the failover process. This is
different than a traditional VNN environment where all connections to the primary
replica typically close when the availability group fails over, the listener goes
offline, and the primary replica transitions to the secondary role. When using a
DNN listener, you may need to adjust application connection strings to ensure that
connections are redirected to the new primary replica upon failover.
Open transactions: Open transactions against a database in a failing-over
availability group will close and roll back, and you need to manually reconnect. For
example, in SQL Server Management Studio, close the query window and open a
new one.
Setting up a VNN listener in Azure requires a load balancer. There are two main options
for load balancers in Azure: external (public) or internal. The external (public) load
balancer is internet-facing and is associated with a public virtual IP that's accessible over
the internet. An internal load balancer supports only clients within the same virtual
network. For either load balancer type, you must enable Direct Server Return.
You can still connect to each availability replica separately by connecting directly to the
service instance. Also, because availability groups are backward compatible with
database mirroring clients, you can connect to the availability replicas like database
mirroring partners as long as the replicas are configured similarly to database mirroring:
The following is an example client connection string that corresponds to this database
mirroring-like configuration using ADO.NET or SQL Server Native Client:
Console
Data Source=ReplicaServer1;Failover Partner=ReplicaServer2;Initial
Catalog=AvailabilityDatabase;
For security reasons, broadcasting on any public cloud (Azure, Google, AWS) isn't allowed, so the use of ARP and gratuitous ARP (GARP) on Azure isn't supported. To overcome this
difference in networking environments, SQL Server VMs in a single subnet availability
group rely on load balancers to route traffic to the appropriate IP addresses. Load
balancers are configured with a frontend IP address that corresponds to the listener and
a probe port is assigned so that the Azure Load Balancer periodically polls for the status
of the replicas in the availability group. Since only the primary replica SQL Server VM
responds to the TCP probe, incoming traffic is then routed to the VM that successfully
responds to the probe. Additionally, the corresponding probe port is configured as the
WSFC cluster IP, ensuring the Primary replica responds to the TCP probe.
Availability groups configured in a single subnet must either use a load balancer or
distributed network name (DNN) to route traffic to the appropriate replica. To avoid
these dependencies, configure your availability group in multiple subnets so the
availability group listener is configured with an IP address for a replica in each subnet,
and can route traffic appropriately.
If you've already created your availability group in a single subnet, you can migrate it to
a multi-subnet environment.
Lease mechanism
For SQL Server, the AG resource DLL determines the health of the AG based on the AG
lease mechanism and Always On health detection. The AG resource DLL exposes
resource health through the IsAlive operation. The resource monitor polls IsAlive at the
cluster heartbeat interval, which is set by the CrossSubnetDelay and SameSubnetDelay
cluster-wide values. On a primary node, the cluster service initiates failover whenever the
IsAlive call to the resource DLL returns that the AG isn't healthy.
The AG resource DLL monitors the status of internal SQL Server components.
sp_server_diagnostics reports the health of these components to SQL Server on an
interval controlled by HealthCheckTimeout.
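The health check timeout can be adjusted per availability group with T-SQL. The following is a sketch; the availability group name is a placeholder, and 60000 ms is an illustrative value (the default is 30000 ms):

```sql
-- Hypothetical example: relax the availability group health check timeout,
-- the interval after which sp_server_diagnostics results are considered
-- stale and the AG is declared unhealthy.
ALTER AVAILABILITY GROUP [MyAg]
SET (HEALTH_CHECK_TIMEOUT = 60000);
```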
Unlike other failover mechanisms, the SQL Server instance plays an active role in the
lease mechanism. The lease mechanism is used as a LooksAlive validation between the
Cluster resource host and the SQL Server process. The mechanism is used to ensure that
the two sides (the Cluster Service and SQL Server service) are in frequent contact,
checking each other's state and ultimately preventing a split-brain scenario.
Network configuration
Deploy your SQL Server VMs to multiple subnets whenever possible to avoid the
dependency on an Azure Load Balancer or a distributed network name (DNN) to route
traffic to your availability group listener.
On an Azure VM failover cluster, we recommend a single NIC per server (cluster node).
Azure networking has physical redundancy, which makes additional NICs unnecessary
on an Azure VM failover cluster. Although the cluster validation report issues a warning
that the nodes are only reachable on a single network, this warning can be safely
ignored on Azure VM failover clusters.
Deployment options
Tip
Eliminate the need for an Azure Load Balancer or distributed network name (DNN)
for your Always On availability group by creating your SQL Server VMs in multiple
subnets within the same Azure virtual network.
There are multiple options for deploying an availability group to SQL Server on Azure
VMs, some with more automation than others.
Next steps
To get started, review the HADR best practices, and then deploy your availability group
manually with the availability group tutorial.
Applies to:
SQL Server on Azure VM
This article introduces feature differences when you're working with failover cluster
instances (FCI) for SQL Server on Azure Virtual Machines (VMs).
Overview
SQL Server on Azure VMs uses Windows Server Failover Clustering (WSFC) functionality
to provide local high availability through redundancy at the server-instance level: a
failover cluster instance. An FCI is a single instance of SQL Server that's installed across
WSFC (or simply the cluster) nodes and, possibly, across multiple subnets. On the
network, an FCI appears to be a single instance of SQL Server running on a single
computer. But the FCI provides failover from one WSFC node to another if the current
node becomes unavailable.
The rest of the article focuses on the differences for failover cluster instances when
they're used with SQL Server on Azure VMs. To learn more about the failover clustering
technology, see:
Note
It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.
Quorum
Failover cluster instances with SQL Server on Azure Virtual Machines support using a
disk witness, a cloud witness, or a file share witness for cluster quorum.
To learn more, see Quorum best practices with SQL Server VMs in Azure.
Storage
In traditional on-premises clustered environments, a Windows failover cluster uses a
storage area network (SAN) that's accessible by both nodes as the shared storage. SQL
Server files are hosted on the shared storage, and only the active node can access the
files at one time.
SQL Server on Azure VMs offers various options as a shared storage solution for a
deployment of SQL Server failover cluster instances:
Supported VM availability sets and availability zones, by storage option:

Premium SSD LRS: availability sets, with or without a proximity placement group
Premium SSD ZRS: availability zones
Ultra disks: same availability zone
The rest of this section lists the benefits and limitations of each storage option available
for SQL Server on Azure VMs.
Benefits:
Useful for applications looking to migrate to Azure while keeping their high-
availability and disaster recovery (HADR) architecture as is.
Can migrate clustered applications to Azure as is because of SCSI Persistent
Reservations (SCSI PR) support.
Supports shared Azure Premium SSD and Azure Ultra Disk storage.
Can use a single shared disk or stripe multiple shared disks to create a shared
storage pool.
Supports Filestream.
Premium SSDs support availability sets.
Premium SSDs Zone Redundant Storage (ZRS) supports Availability Zones. VMs
part of FCI can be placed in different availability zones.
Note
While Azure shared disks also support Standard SSD sizes, we do not recommend
using Standard SSDs for SQL Server workloads due to the performance limitations.
Limitations:
To get started, see SQL Server failover cluster instance with Azure shared disks.
Benefits:
To get started, see SQL Server failover cluster instance with Storage Spaces Direct.
Benefits:
Shared storage solution for virtual machines spread over multiple availability
zones.
Fully managed file system with single-digit latencies and burstable I/O
performance.
Limitations:
To get started, see SQL Server failover cluster instance with Premium file share.
Partner
There are partner clustering solutions with supported storage.
For example, NetApp Private Storage (NPS) exposes an iSCSI target via ExpressRoute
with Equinix to Azure VMs.
For shared storage and data replication solutions from Microsoft partners, contact the
vendor for any issues related to accessing data on failover.
Connectivity
To match the on-premises experience for connecting to your failover cluster instance,
deploy your SQL Server VMs to multiple subnets within the same virtual network.
Having multiple subnets negates the need for the extra dependency on an Azure Load
Balancer, or a distributed network name (DNN) to route your traffic to your FCI.
If you deploy your SQL Server VMs to a single subnet, you can configure a virtual
network name (VNN) and an Azure Load Balancer, or a distributed network name (DNN)
to route traffic to your failover cluster instance. Review the differences between the two
and then deploy either a distributed network name or a virtual network name for your
failover cluster instance.
The distributed network name is recommended, if possible, as failover is faster, and the
overhead and cost of managing the load balancer is eliminated.
Most SQL Server features work transparently with FCIs when using the DNN, but there
are certain features that may require special consideration. See FCI and DNN
interoperability to learn more.
Limitations
Consider the following limitations for failover cluster instances with SQL Server on Azure
Virtual Machines.
If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister from
the extension by deleting the SQL virtual machine resource for the corresponding VMs
and then register it with the SQL IaaS Agent extension again. When you're deleting the
SQL virtual machine resource by using the Azure portal, clear the check box next to the
correct virtual machine to avoid deleting the virtual machine.
SQL Server FCIs registered with the extension do not support features that require the
agent, such as automated backup, patching, and advanced portal management. See the
table of benefits.
MSDTC
Azure Virtual Machines support Microsoft Distributed Transaction Coordinator (MSDTC)
on Windows Server 2019 with storage on Clustered Shared Volumes (CSV) and Azure
Standard Load Balancer or on SQL Server VMs that are using Azure shared disks.
On Azure Virtual Machines, MSDTC isn't supported for Windows Server 2016 or earlier
with Clustered Shared Volumes because:
Next steps
Review cluster configurations best practices, and then you can prepare your SQL Server
VM for FCI.
Applies to:
SQL Server on Azure VM
This article describes the differences when using the Windows Server Failover Cluster
feature with SQL Server on Azure VMs for high availability and disaster recovery (HADR),
such as for Always On availability groups (AG) or failover cluster instances (FCI).
To learn more about the Windows feature itself, see the Windows Server Failover Cluster
documentation.
Overview
SQL Server high availability solutions on Windows, such as Always On availability groups
(AG) or failover cluster instances (FCI) rely on the underlying Windows Server Failover
Clustering (WSFC) service.
The cluster service monitors network connections and the health of nodes in the cluster.
This monitoring is in addition to the health checks that SQL Server does as part of the
availability group or failover cluster instance feature. If the cluster service is unable to
reach the node, or if the AG or FCI role in the cluster becomes unhealthy, then the
cluster service initiates appropriate recovery actions to recover and bring applications
and services online, either on the same or on another node in the cluster.
Setting the threshold for declaring a failure is important in order to achieve a balance
between promptly responding to a failure, and avoiding false failures.
Aggressive monitoring: Provides rapid failure detection and recovery of hard failures, which delivers the highest levels of availability. The cluster service and SQL Server are both less forgiving of transient failures and in some situations may prematurely fail over resources when there are transient outages. Once a failure is detected, the corrective action that follows may take extra time.

Relaxed monitoring: Provides more forgiving failure detection with a greater tolerance for brief transient network issues. It avoids acting on transient failures, but also introduces the risk of delaying the detection of a true failure.
Aggressive settings in a cluster environment in the cloud may lead to premature failures and longer outages; therefore, a relaxed monitoring strategy is recommended for failover clusters on Azure VMs. To adjust threshold settings, see cluster best practices for more detail.
Cluster heartbeat
The primary settings that affect cluster heartbeats and health detection between nodes are:
Delay: Defines the frequency at which cluster heartbeats are sent between nodes. The delay is the number of seconds before the next heartbeat is sent. Within the same cluster, different delay settings can be configured between nodes on the same subnet and between nodes that are on different subnets.

Threshold: The number of heartbeats that can be missed before the cluster takes recovery action. Within the same cluster, different threshold settings can be configured between nodes on the same subnet and between nodes that are on different subnets.
The default values for these settings may be too low for cloud environments, and could
result in unnecessary failures due to transient network issues. To be more tolerant, use
relaxed threshold settings for failover clusters in Azure VMs. See cluster best practices
for more detail.
Quorum
Although a two-node cluster will function without a quorum resource, customers are
strictly required to use a quorum resource to have production support. Cluster
validation won't pass any cluster without a quorum resource.
Technically, a three-node cluster can survive a single node loss (down to two nodes)
without a quorum resource. But after the cluster is down to two nodes, there's a risk that
the clustered resources will go offline to prevent a split-brain scenario if a node is lost or
there's a communication failure between the nodes. Configuring a quorum resource will
allow the cluster resources to remain online with only one node online.
The disk witness is the most resilient quorum option, but to use a disk witness on SQL
Server on Azure VMs, you must use an Azure shared disk, which imposes some
limitations on the high availability solution. As such, use a disk witness when you're
configuring your failover cluster instance with Azure shared disks; otherwise, use a cloud
witness whenever possible.
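The arithmetic behind the witness recommendation is simple majority voting. The sketch below is a simplified model (it deliberately ignores dynamic quorum) showing why a witness keeps a two-node cluster online after one node is lost.

```python
# Simplified quorum model: the cluster stays up while more than half of the
# total votes are online. Real clusters also use dynamic quorum, which this
# sketch deliberately ignores.
def has_quorum(total_votes: int, online_votes: int) -> bool:
    return online_votes > total_votes // 2

# Two nodes, no witness: losing one node loses quorum (1 of 2 votes).
print(has_quorum(total_votes=2, online_votes=1))   # False
# Two nodes plus a cloud or disk witness: one node plus the witness is 2 of 3.
print(has_quorum(total_votes=3, online_votes=2))   # True
```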
The following table lists the quorum options available for SQL Server on Azure VMs:
Virtual network name (VNN)
The load balancer distributes inbound flows that arrive at the front end, and then routes
that traffic to the instances defined by the back-end pool. You configure traffic flow by
using load-balancing rules and health probes. With SQL Server FCI, the back-end pool
instances are the Azure virtual machines running SQL Server, and with availability
groups, the back-end pool is the listener. There is a slight failover delay when you're
using the load balancer, because the health probe conducts alive checks every 10
seconds by default.
To get started, learn how to configure Azure Load Balancer for a failover cluster instance
or an availability group.
Configuration of the VNN can be cumbersome, it's an additional source of failure, it can
cause a delay in failure detection, and there is an overhead and cost associated with
managing the additional resource. To address some of these limitations, SQL Server
introduced support for the Distributed Network Name feature.
Distributed network name (DNN)
To match the on-premises experience for connecting to your availability group listener
or failover cluster instance, deploy your SQL Server VMs to multiple subnets within the
same virtual network. Having multiple subnets negates the need for the extra
dependency on a DNN to route traffic to your HADR solution. To learn more, see Multi-
subnet AG, and Multi-subnet FCI.
For SQL Server VMs deployed to a single subnet, the distributed network name feature
provides an alternative way for SQL Server clients to connect to the SQL Server failover
cluster instance or availability group listener without using a load balancer. The DNN
feature is available starting with SQL Server 2016 SP3, SQL Server 2017 CU25, and SQL
Server 2019 CU8, on Windows Server 2016 and later.
When a DNN resource is created, the cluster binds the DNS name with the IP addresses
of all the nodes in the cluster. The client will try to connect to each IP address in this list
to find which resource to connect to. You can accelerate this process by specifying
MultiSubnetFailover=True in the connection string. This setting tells the provider to try
all IP addresses in parallel, so the client can connect to the FCI or listener instantly.
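As an illustration, a client connection string for a DNN listener might look like the following. The server name, port, database, and helper function are hypothetical placeholders; only the MultiSubnetFailover=True parameter comes from the guidance above.

```python
# Hypothetical helper that builds an ADO.NET-style connection string for a
# DNN-based availability group listener. Only MultiSubnetFailover=True is
# prescribed by the guidance; the names and port are placeholder values.
def build_connection_string(server: str, port: int, database: str) -> str:
    return (
        f"Server={server},{port};"
        f"Database={database};"
        "Integrated Security=true;"
        "MultiSubnetFailover=True;"  # tells the provider to try all cluster IPs in parallel
    )

conn_str = build_connection_string("ag-dnn-listener", 1433, "SalesDb")
print(conn_str)
```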
Using a DNN provides the following benefits:
The end-to-end solution is more robust since you no longer have to maintain the
load balancer resource.
Eliminating the load balancer probes minimizes failover duration.
The DNN simplifies provisioning and management of the failover cluster instance
or availability group listener with SQL Server on Azure VMs.
Most SQL Server features work transparently with FCI and availability groups when using
the DNN, but there are certain features that may require special consideration.
Supported SQL version: SQL Server 2019 CU2 (FCI) and SQL Server 2019 CU8 (AG)
To get started, learn to configure a distributed network name resource for a failover
cluster instance or an availability group.
There are additional considerations when using the DNN with other SQL Server features.
See FCI and DNN interoperability and AG and DNN interoperability to learn more.
Recovery actions
The cluster service takes corrective action when a failure is detected. This could restart
the resource on the existing node, or fail the resource over to another node. Once
corrective measures are initiated, they may take some time to complete.
For example, a restarted availability group must complete a sequence of recovery steps
before it comes online.
Since recovery could take some time, aggressive monitoring set to detect a failure in 20
seconds could result in an outage of minutes if a transient event occurs (such as
memory-preserving Azure VM maintenance). Setting the monitoring to a more relaxed
value of 40 seconds can help avoid a longer interruption of service.
To adjust threshold settings, see cluster best practices for more detail.
Node location
Nodes in a Windows cluster on virtual machines in Azure may be physically separated
within the same Azure region, or they can be in different regions. The distance may
introduce network latency, much like having cluster nodes spread between locations in
your own facilities would. In cloud environments, the difference is that within a region
you may not be aware of the distance between nodes. Moreover, some other factors like
physical and virtual components, number of hops, etc. can also contribute to increased
latency. If latency between the nodes is a concern, consider placing the nodes of the
cluster within a proximity placement group to guarantee network proximity.
Resource limits
When you configure an Azure VM, you determine the computing resource limits for the
CPU, memory, and IO. Workloads that require more resources than the limits of the
purchased Azure VM or its disks may cause VM performance issues. Performance degradation may
result in a failed health check for either the cluster service, or for the SQL Server high
availability feature. Resource bottlenecks may make the node or resource appear down
to the cluster or SQL Server.
Intensive SQL Server IO operations, or maintenance operations such as backups, index
maintenance, or statistics maintenance, could cause the VM or disk to reach its IOPS or MB/s
throughput limits, which could make SQL Server unresponsive to an IsAlive/LooksAlive check.
If your SQL Server is experiencing unexpected failovers, check to make sure you are
following all performance best practices and monitor the server for disk or VM-level
capping.
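One way to reason about these caps: the effective throughput a workload achieves is bounded by whichever limit (VM level or disk level) is lower. The sketch below uses invented numbers purely for illustration, not real Azure limits.

```python
# Illustrative only: the effective IOPS a workload can achieve is bounded by
# the lower of the VM-level cap and the aggregate disk-level cap. The numbers
# below are invented for the example, not real Azure limits.
def effective_iops(vm_cap: int, disk_cap: int, workload_demand: int) -> int:
    return min(workload_demand, vm_cap, disk_cap)

# A backup job demanding 30,000 IOPS on a VM capped at 25,600 IOPS with disks
# capped at 20,000 IOPS gets throttled to the disk cap.
print(effective_iops(vm_cap=25_600, disk_cap=20_000, workload_demand=30_000))
```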
Azure platform maintenance
Most platform updates don't affect customer VMs. When a no-impact update isn't
possible, Azure chooses the update mechanism that's least impactful to customer VMs.
Most nonzero-impact maintenance pauses the VM for less than 10 seconds. In certain
cases, Azure uses memory-preserving maintenance mechanisms. These mechanisms
pause the VM for up to 30 seconds and preserve the memory in RAM. The VM is then
resumed, and its clock is automatically synchronized.
A resource bottleneck during platform maintenance may make the AG or FCI appear
down to the cluster service. See the resource limits section of this article to learn more.
If you are using aggressive cluster monitoring, an extended VM pause may trigger a
failover. A failover will often cause more downtime than the maintenance pause, so it is
recommended to use relaxed monitoring to avoid triggering a failover while the VM is
paused for maintenance. See the cluster best practices for more information on setting
cluster thresholds in Azure VMs.
Limitations
Consider the following limitations when you're working with FCI or availability groups
and SQL Server on Azure Virtual Machines.
MSDTC
Azure Virtual Machines support Microsoft Distributed Transaction Coordinator (MSDTC)
on Windows Server 2019 with storage on Clustered Shared Volumes (CSV) and Azure
Standard Load Balancer or on SQL Server VMs that are using Azure shared disks.
On Azure Virtual Machines, MSDTC isn't supported for Windows Server 2016 or earlier
with Clustered Shared Volumes.
Next steps
Now that you've familiarized yourself with the differences when using a Windows
Failover Cluster with SQL Server on Azure VMs, learn about the high availability features
availability groups or failover cluster instances. If you're ready to get started, be sure to
review the best practices for configuration recommendations.
Checklist: Best practices for SQL Server
on Azure VMs
Article • 03/29/2023
Applies to:
SQL Server on Azure VM
This article provides a quick checklist as a series of best practices and guidelines to
optimize performance of your SQL Server on Azure Virtual Machines (VMs).
For comprehensive details, see the other articles in this series: VM size, Storage, Security,
HADR configuration, Collect baseline.
Enable SQL Assessment for SQL Server on Azure VMs and your SQL Server will be
evaluated against known best practices with results on the SQL VM management page
of the Azure portal.
For videos about the latest features to optimize SQL Server VM performance and
automate management, review the following Data Exposed videos:
Overview
While running SQL Server on Azure Virtual Machines, continue using the same database
performance tuning options that are applicable to SQL Server in on-premises server
environments. However, the performance of a relational database in a public cloud
depends on many factors, such as the size of a virtual machine, and the configuration of
the data disks.
There's typically a trade-off between optimizing for costs and optimizing for
performance. This performance best practices series is focused on getting the best
performance for SQL Server on Azure Virtual Machines. If your workload is less
demanding, you might not require every recommended optimization. Consider your
performance needs, costs, and workload patterns as you evaluate these
recommendations.
VM size
The checklist in this section covers the VM size best practices for SQL Server on Azure
VMs.
Storage
The checklist in this section covers the storage best practices for SQL Server on Azure
VMs.
Security
The checklist in this section covers the security best practices for SQL Server on Azure
VMs.
SQL Server features and capabilities provide security at the data level, and are how you
achieve defense-in-depth at the infrastructure level for cloud-based and hybrid
solutions. In addition, with Azure security measures, it's possible to encrypt your
sensitive data, protect virtual machines from viruses and malware, secure network traffic,
identify and detect threats, meet compliance requirements, and use a single method for
administration and reporting for any security need in the hybrid cloud.
Use Microsoft Defender for Cloud to evaluate and take action to improve the
security posture of your data environment. Capabilities such as Azure Advanced
Threat Protection (ATP) can be leveraged across your hybrid workloads to improve
security evaluation and give the ability to react to risks. Registering your SQL
Server VM with the SQL IaaS Agent extension surfaces Microsoft Defender for
Cloud assessments within the SQL virtual machine resource of the Azure portal.
Use Microsoft Defender for SQL to discover and mitigate potential database
vulnerabilities, as well as detect anomalous activities that could indicate a threat to
your SQL Server instance and database layer.
Vulnerability Assessment is a part of Microsoft Defender for SQL that can discover
and help remediate potential risks to your SQL Server environment. It provides
visibility into your security state, and includes actionable steps to resolve security
issues.
Use Azure confidential VMs to reinforce protection of your data in-use, and data-
at-rest against host operator access. Azure confidential VMs allow you to
confidently store your sensitive data in the cloud and meet strict compliance
requirements.
If you're on SQL Server 2022, consider using Azure Active Directory authentication
to connect to your instance of SQL Server.
Azure Advisor analyzes your resource configuration and usage telemetry and then
recommends solutions that can help you improve the cost effectiveness,
performance, high availability, and security of your Azure resources. Leverage
Azure Advisor at the virtual machine, resource group, or subscription level to help
identify and apply best practices to optimize your Azure deployments.
Use Azure Disk Encryption when your compliance and security needs require you
to encrypt the data end-to-end using your encryption keys, including encryption of
the ephemeral (locally attached temporary) disk.
Managed Disks are encrypted at rest by default using Azure Storage Service
Encryption, where the encryption keys are Microsoft-managed keys stored in
Azure.
For a comparison of the managed disk encryption options review the managed
disk encryption comparison chart
Management ports should be closed on your virtual machines - Open remote
management ports expose your VM to a high level of risk from internet-based
attacks. These attacks attempt to brute force credentials to gain admin access to
the machine.
Turn on Just-in-time (JIT) access for Azure virtual machines
Use Azure Bastion over Remote Desktop Protocol (RDP).
Lock down ports and only allow the necessary application traffic using Azure
Firewall, a managed Firewall as a Service (FaaS) that grants or denies server
access based on the originating IP address.
Use Network Security Groups (NSGs) to filter network traffic to, and from, Azure
resources on Azure Virtual Networks
Leverage Application Security Groups to group servers together with similar port
filtering requirements, with similar functions, such as web servers and database
servers.
For web and application servers, leverage Azure Distributed Denial of Service
(DDoS) protection. DDoS attacks are designed to overwhelm and exhaust network
resources, making apps slow or unresponsive. It is common for DDoS attacks to
target user interfaces. Azure DDoS protection sanitizes unwanted network traffic
before it impacts service availability.
Use VM extensions to help address anti-malware, desired state, threat detection,
prevention, and remediation to address threats at the operating system, machine,
and network levels:
Guest Configuration extension performs audit and configuration operations
inside virtual machines.
Network Watcher Agent virtual machine extension for Windows and Linux
monitors network performance, diagnostic, and analytics service that allows
monitoring of Azure networks.
Microsoft Antimalware Extension for Windows to help identify and remove
viruses, spyware, and other malicious software, with configurable alerts.
Evaluate 3rd party extensions such as Symantec Endpoint Protection for
Windows VM (/azure/virtual-machines/extensions/symantec)
Use Azure Policy to create business rules that can be applied to your environment.
Azure Policies evaluate Azure resources by comparing the properties of those
resources against rules defined in JSON format.
Azure Blueprints enables cloud architects and central information technology
groups to define a repeatable set of Azure resources that implements and adheres
to an organization's standards, patterns, and requirements. Azure Blueprints are
different than Azure Policies.
Azure features
The following is a quick checklist of best practices for Azure-specific guidance when
running your SQL Server on Azure VM:
Register with the SQL IaaS Agent Extension to unlock a number of feature benefits.
Leverage the best backup and restore strategy for your SQL Server workload.
Ensure Accelerated Networking is enabled on the virtual machine.
Leverage Microsoft Defender for Cloud to improve the overall security posture of
your virtual machine deployment.
Leverage Microsoft Defender for SQL, integrated with Microsoft Defender for
Cloud, for specific SQL Server VM coverage including vulnerability assessments,
and just-in-time access, which reduces the attack surface while allowing legitimate
users to access virtual machines when necessary. To learn more, see vulnerability
assessments, enable vulnerability assessments for SQL Server VMs, and just-in-time
access.
Leverage Azure Advisor to address performance, cost, reliability, operational
excellence, and security recommendations.
Leverage Azure Monitor to collect, analyze, and act on telemetry data from your
SQL Server environment. This includes identifying infrastructure issues with VM
insights and monitoring data with Log Analytics for deeper diagnostics.
Enable Autoshutdown for development and test environments.
Implement a high availability and disaster recovery (HADR) solution that meets
your business continuity SLAs; see the HADR options available for SQL
Server on Azure VMs.
Use the Azure portal (support + troubleshooting) to evaluate resource health and
history; submit new support requests when needed.
HADR configuration
The checklist in this section covers the HADR best practices for SQL Server on Azure
VMs.
High availability and disaster recovery (HADR) features, such as the Always On
availability group and the failover cluster instance rely on underlying Windows Server
Failover Cluster technology. Review the best practices for modifying your HADR settings
to better support the cloud environment.
Deploy your SQL Server VMs to multiple subnets whenever possible to avoid the
dependency on an Azure Load Balancer or a distributed network name (DNN) to
route traffic to your HADR solution.
Change the cluster to less aggressive parameters to avoid unexpected outages
from transient network failures or Azure platform maintenance. To learn more, see
heartbeat and threshold settings. For Windows Server 2012 and later, use the
following recommended values:
SameSubnetDelay: 1 second
SameSubnetThreshold: 40 heartbeats
CrossSubnetDelay: 1 second
CrossSubnetThreshold: 40 heartbeats
Place your VMs in an availability set or different availability zones. To learn more,
see VM availability settings.
Use a single NIC per cluster node.
Configure cluster quorum voting to use an odd number of votes (three or more). Don't
assign votes to DR regions.
Carefully monitor resource limits to avoid unexpected restarts or failovers due to
resource constraints.
Ensure your OS, drivers, and SQL Server are at the latest builds.
Optimize performance for SQL Server on Azure VMs. Review the other sections
in this article to learn more.
Reduce or spread out workload to avoid resource limits.
Move to a VM or disk that has higher limits to avoid constraints.
For your SQL Server availability group or failover cluster instance, consider these best
practices:
To connect to your HADR solution using the distributed network name (DNN),
consider the following:
You must use a client driver that supports MultiSubnetFailover = True, and this
parameter must be in the connection string.
Use a unique DNN port in the connection string when connecting to the DNN
listener for an availability group.
Use a database mirroring connection string for a basic availability group to bypass
the need for a load balancer or DNN.
Validate the sector size of your VHDs before deploying your high availability
solution to avoid having misaligned I/Os. See KB3009974 to learn more.
If the SQL Server database engine, Always On availability group listener, or failover
cluster instance health probe are configured to use a port between 49,152 and
65,536 (the default dynamic port range for TCP/IP), add an exclusion for each port.
Doing so prevents other systems from being dynamically assigned the same port.
The following example creates an exclusion for port 59999:
netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1
store=persistent
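A quick check of whether a configured port needs such an exclusion; the range boundaries below are the default Windows dynamic TCP port range cited above.

```python
# The default Windows dynamic port range for TCP/IP spans 49152-65535, per the
# guidance above. Listener or health-probe ports inside this range should get
# an exclusion so they aren't dynamically assigned to another process.
DYNAMIC_RANGE = range(49152, 65536)

def needs_exclusion(port: int) -> bool:
    return port in DYNAMIC_RANGE

print(needs_exclusion(59999))  # True: inside the dynamic range
print(needs_exclusion(1433))   # False: the default SQL Server port is below it
```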
Next steps
To learn more, see the other articles in this best practices series:
VM size
Storage
Security
HADR settings
Collect baseline
Review other SQL Server Virtual Machine articles at SQL Server on Azure Virtual
Machines Overview. If you have questions about SQL Server virtual machines, see the
Frequently Asked Questions.
VM size: Performance best practices for
SQL Server on Azure VMs
Article • 03/29/2023
Applies to:
SQL Server on Azure VM
This article provides VM size guidance and a series of best practices to optimize
performance for your SQL Server on Azure Virtual Machines (VMs).
There's typically a trade-off between optimizing for costs and optimizing for
performance. This performance best practices series is focused on getting the best
performance for SQL Server on Azure Virtual Machines. If your workload is less
demanding, you might not require every recommended optimization. Consider your
performance needs, costs, and workload patterns as you evaluate these
recommendations.
For comprehensive details, see the other articles in this series: Checklist, Storage,
Security, HADR configuration, Collect baseline.
Checklist
Review the following checklist for a brief overview of the VM size best practices that the
rest of the article covers in greater detail:
To compare the VM size checklist with the others, see the comprehensive Performance
best practices checklist.
Overview
When you're creating a SQL Server on Azure VM, carefully consider the type of workload
necessary. If you're migrating an existing environment, collect a performance baseline to
determine your SQL Server on Azure VM requirements. If this is a new VM, then create
your new SQL Server VM based on your vendor requirements.
If you're creating a new SQL Server VM with a new application built for the cloud, you
can easily size your SQL Server VM as your data and usage requirements evolve.
Start
the development environments with the lower-tier D-Series, B-Series, or Av2-series and
grow your environment over time.
Use the SQL Server VM marketplace images with the storage configuration in the portal.
This makes it easier to properly create the storage pools necessary to get the size, IOPS,
and throughput necessary for your workloads. It is important to choose SQL Server VMs
that support premium storage and premium storage caching. See the storage article to
learn more.
Note
The larger Ebdsv5-series sizes (48 vCPUs and larger) offer support for NVMe
enabled storage access. In order to take advantage of this high I/O performance,
you must deploy your virtual machine using NVMe. NVMe support for SQL Server
marketplace images will be coming soon, but for now you must self-install SQL
Server in order to take advantage of NVMe.
SQL Server data warehouse and mission critical environments will often need to scale
beyond the 8 memory-to-vCore ratio. For medium environments, you may want to
choose a 16 memory-to-vCore ratio, and a 32 memory-to-vCore ratio for larger data
warehouse environments.
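These ratios translate directly into memory requirements. The helper below is illustrative; the ratios (8 baseline, 16 for medium, 32 for larger data warehouses) are those discussed above.

```python
# Illustrative: expected memory for a given vCore count at the
# memory-to-vCore ratios discussed above.
def required_memory_gib(vcores: int, ratio: int = 8) -> int:
    return vcores * ratio

print(required_memory_gib(8))       # 64 GiB at the baseline ratio of 8
print(required_memory_gib(16, 16))  # 256 GiB for a medium data warehouse
print(required_memory_gib(16, 32))  # 512 GiB for a larger data warehouse
```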
SQL Server data warehouse environments often benefit from the parallel processing of
larger machines. For this reason, the M-series and the Mv2-series are good options for
larger data warehouse environments.
Use the vCPU and memory configuration from your source machine as a baseline for
migrating a current on-premises SQL Server database to SQL Server on Azure VMs. If
you have Software Assurance, take advantage of Azure Hybrid Benefit to bring your
licenses to Azure and save on SQL Server licensing costs.
Memory optimized
The memory optimized virtual machine sizes are a primary target for SQL Server VMs
and the recommended choice by Microsoft. The memory optimized virtual machines
offer stronger memory-to-CPU ratios and medium-to-large cache options.
Ebdsv5-series
The Ebdsv5-series is a new memory-optimized series of VMs that offer the highest
remote storage throughput available in Azure. These VMs have a memory-to-vCore
ratio of 8 which, together with the high I/O throughput, makes them ideal for SQL
Server workloads. The Ebdsv5-series VMs offer the best price-performance for SQL
Server workloads running on Azure virtual machines and we strongly recommend them
for most of your production SQL Server workloads.
Edsv5-series
The Edsv5-series is designed for memory-intensive applications and is ideal for SQL
Server workloads that don't require as high I/O throughput as the Ebdsv5 series offers.
These VMs have a large local storage SSD capacity, up to 672 GiB of RAM, and very high
local and remote storage throughput. There's a nearly consistent 8 GiB of memory per
vCore across most of these virtual machines, which is ideal for most SQL Server
workloads.
The largest virtual machine in this group is the Standard_E104ids_v5, which offers 104
vCores and 672 GiB of memory. This virtual machine is notable because it's isolated,
which means it's guaranteed to be the only virtual machine running on the host, and
therefore is isolated from other customer workloads. It has a memory-to-vCore ratio
that is lower than what is recommended for SQL Server, so it should only be used if
isolation is required.
The Edsv5-series virtual machines support premium storage, and premium storage
caching.
ECadsv5-series
The ECadsv5-series virtual machine sizes are memory-optimized Azure confidential
VMs with a temporary disk. Review confidential VMs for information about the security
benefits of Azure confidential VMs.
M-series and Mv2-series
The Mv2-series has the highest vCore counts and memory and is recommended for
mission critical and data warehouse workloads. Mv2-series instances are memory
optimized VM sizes providing unparalleled computational performance to support large
in-memory databases and workloads with a high memory-to-CPU ratio that is perfect
for relational database servers, large caches, and in-memory analytics.
Some of the features of the M and Mv2-series attractive for SQL Server performance
include premium storage and premium storage caching support, ultra-disk support, and
write acceleration.
General Purpose
The General Purpose virtual machine sizes are designed to provide balanced memory-
to-vCore ratios for smaller entry level workloads such as development and test, web
servers, and smaller database servers.
Because of the smaller memory-to-vCore ratios with the General Purpose virtual
machines, it's important to carefully monitor memory-based performance counters to
ensure SQL Server is able to get the buffer cache memory it needs. See memory
performance baseline for more information.
Since the starting recommendation for production workloads is a memory-to-vCore
ratio of 8, the minimum recommended configuration for a General Purpose VM running
SQL Server is 4 vCPU and 32 GiB of memory.
Ddsv5 series
The Ddsv5-series offers a fair combination of vCPU, memory, and temporary disk but
with smaller memory-to-vCore support.
The Ddsv5 VMs include lower latency and higher-speed local storage.
These machines are ideal for side-by-side SQL and app deployments that require fast
access to temp storage and departmental relational databases. There's a standard
memory-to-vCore ratio of 4 across all of the virtual machines in this series.
For this reason, it's recommended to use the D8ds_v5 as the starter virtual machine in
this series, which has 8 vCores and 32 GiBs of memory. The largest machine is the
D96ds_v5, which has 96 vCores and 256 GiBs of memory.
The Ddsv5-series virtual machines support premium storage and premium storage
caching.
DCadsv5-series
The DCadsv5-series virtual machine sizes are general purpose Azure confidential VMs
with temporary disk. Review confidential VMs for information about the security benefits
of Azure confidential VMs.
B-series
The burstable B-series virtual machine sizes are ideal for workloads that don't need
consistent performance such as proof of concept and very small application and
development servers.
Most of the burstable B-series virtual machine sizes have a memory-to-vCore ratio of 4.
The largest of these machines is the Standard_B20ms with 20 vCores and 80 GiB of
memory.
This series is unique as the apps have the ability to burst during business hours with
burstable credits varying based on machine size.
When the credits are exhausted, the VM returns to the baseline machine performance.
The benefit of the B-series is the compute savings you could achieve compared to the
other VM sizes in other series especially if you need the processing power sparingly
throughout the day.
This series supports premium storage, but does not support premium storage caching.
Note
The burstable B-series does not have the memory-to-vCore ratio of 8 that is
recommended for SQL Server workloads. As such, consider using these virtual
machines for smaller applications, web servers, and development workloads only.
Av2-series
The Av2-series VMs are best suited for entry-level workloads like development and test,
low traffic web servers, small to medium app databases, and proof-of-concepts.
These virtual machines are good options for smaller development and test SQL
Server machines.
The 8 vCore Standard_A8m_v2 may also be a good option for small application and web
servers.
Note
The Av2 series does not support premium storage and as such, is not
recommended for production SQL Server workloads even with the virtual machines
that have a memory-to-vCore ratio of 8.
Storage optimized
The storage optimized VM sizes are for specific use cases. These virtual machines are
specifically designed with optimized disk throughput and IO.
Lsv2-series
The Lsv2-series features high throughput, low latency, and local NVMe storage. The
Lsv2-series VMs are optimized to use the local disk on the node attached directly to the
VM rather than using durable data disks.
These virtual machines are strong options for big data, data warehouse, reporting, and
ETL workloads. The high throughput and IOPS of the local NVMe storage is a good use
case for processing files that will be loaded into your database and other scenarios
where the data can be recreated from the source system or other repositories such as
Azure Blob storage or Azure Data Lake. Lsv2-series VMs can also burst their disk
performance for up to 30 minutes at a time.
These virtual machines range in size from 8 to 80 vCPUs, with 8 GiB of memory per
vCPU, and for every 8 vCPUs there is 1.92 TB of NVMe SSD. This means that the largest
VM of this series, the L80s_v2, has 80 vCPUs and 640 GiB of memory with 10 x 1.92-TB
NVMe storage. There's a consistent memory-to-vCore ratio of 8 across all of these virtual
machines.
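The L80s_v2 figures check out arithmetically from the per-vCPU ratios quoted above:

```python
# Sanity-check the Lsv2 sizing quoted above: 8 GiB of memory per vCPU and
# one 1.92-TB NVMe device per 8 vCPUs.
vcpus = 80                 # L80s_v2
memory_gib = vcpus * 8     # 640 GiB
nvme_devices = vcpus // 8  # 10 devices of 1.92 TB each
print(memory_gib, nvme_devices)
```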
The NVMe storage is ephemeral, meaning that data on these disks will be lost if you
deallocate your virtual machine, or if it's moved to a different host for service healing.
The Lsv2 and Ls series support premium storage, but not premium storage caching. The
creation of a local cache to increase IOPs is not supported.
Warning
Storing your data files on the ephemeral NVMe storage could result in data loss
when the VM is deallocated.
Constrained vCores
High performing SQL Server workloads often need larger amounts of memory, IOPS,
and throughput without the higher vCore counts.
Most OLTP workloads are application databases driven by large numbers of smaller
transactions. With OLTP workloads, only a small amount of the data is read or modified,
but the volumes of transactions driven by user counts are much higher. It is important
to have the SQL Server memory available to cache plans, store recently accessed data
for performance, and ensure physical reads can be read into memory quickly.
These OLTP environments need higher amounts of memory, fast storage, and the I/O
bandwidth necessary to perform optimally.
In order to maintain this level of performance without the higher SQL Server licensing
costs, Azure offers VM sizes with constrained vCPU counts.
This helps control licensing costs by reducing the available vCores while maintaining the
same memory, storage, and I/O bandwidth of the parent virtual machine.
The vCPU count can be constrained to one-half or one-quarter of the original VM size.
Reducing the vCores available to the virtual machine achieves higher memory-to-vCore
ratios, but the compute cost remains the same.
These new VM sizes have a suffix that specifies the number of active vCPUs to make
them easier to identify.
For example, the M64-32ms requires licensing only 32 SQL Server vCores, with the
memory, I/O, and throughput of the M64ms, and the M64-16ms requires licensing only
16 vCores. While the M64-16ms has a quarter of the SQL Server licensing cost of the
M64ms, the compute cost of the virtual machine is the same.
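The licensing comparison can be sketched as follows. The per-core price is a made-up placeholder; only the vCore counts (64 for the M64ms, 16 for the M64-16ms) come from the example above.

```python
# Illustrative comparison of constrained-vCore licensing: the M64-16ms
# licenses 16 of the parent M64ms's 64 vCores, so its SQL Server licensing
# cost is a quarter of the parent's, while compute cost is unchanged.
def sql_license_cost(active_vcores: int, price_per_core: float) -> float:
    return active_vcores * price_per_core

PRICE = 100.0  # hypothetical per-core licensing price
parent = sql_license_cost(64, PRICE)       # M64ms
constrained = sql_license_cost(16, PRICE)  # M64-16ms
print(constrained / parent)                # 0.25
```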
Next steps
To learn more, see the other articles in this best practices series:
Quick checklist
Storage
Security
HADR settings
Collect baseline
For security best practices, see Security considerations for SQL Server on Azure
Virtual Machines.
Review other SQL Server Virtual Machine articles at SQL Server on Azure Virtual
Machines Overview. If you have questions about SQL Server virtual machines, see
the Frequently Asked Questions.
Storage: Performance best practices for
SQL Server on Azure VMs
Article • 06/22/2023
Applies to:
SQL Server on Azure VM
This article provides storage best practices and guidelines to optimize performance for
your SQL Server on Azure Virtual Machines (VM).
There's typically a trade-off between optimizing for costs and optimizing for
performance. This performance best practices series is focused on getting the best
performance for SQL Server on Azure VMs. If your workload is less demanding, you
might not require every recommended optimization. Consider your performance needs,
costs, and workload patterns as you evaluate these recommendations.
To learn more, see the other articles in this series: Checklist, VM size, Security, HADR
configuration, and Collect baseline.
Checklist
Review the following checklist for a brief overview of the storage best practices that the
rest of the article covers in greater detail:
Overview
To find the most effective configuration for SQL Server workloads on an Azure VM, start
by measuring the storage performance of your business application. Once storage
requirements are known, select a virtual machine that supports the necessary IOPS and
throughput with the appropriate memory-to-vCore ratio.
Choose a VM size with enough storage scalability for your workload and a mixture of
disks (usually in a storage pool) that meet the capacity and performance requirements
of your business.
The type of disk depends on both the file type that's hosted on the disk and your peak
performance requirements.
Tip
Provisioning a SQL Server VM through the Azure portal helps guide you through
the storage configuration process and implements most storage best practices,
such as creating separate storage pools for your data and log files, targeting
tempdb to the D:\ drive, and enabling the optimal caching policy.
VM disk types
You have a choice in the performance level for your disks. The types of managed disks
available as underlying storage (listed by increasing performance capabilities) are
Standard hard disk drives (HDD), Standard solid-state drives (SSD), Premium SSDs,
Premium SSD v2, and Ultra Disks.
For Standard HDDs, Standard SSDs, and Premium SSDs, the performance of the disk
increases with the size of the disk, ranging from the P1 with 4 GiB of space and 120
IOPS to the P80 with 32 TiB of storage and 20,000 IOPS. Premium
storage supports a storage cache that helps improve read and write performance for
some workloads. For more information, see Managed disks overview.
The performance of Premium SSD v2 and Ultra Disks can be changed independently of
the size of the disk; for details, see Ultra disk performance and Premium SSD v2
performance.
There are also three main disk roles to consider for your SQL Server on Azure VM - an
OS disk, a temporary disk, and your data disks. Carefully choose what is stored on the
operating system drive (C:\) and the ephemeral temporary drive (D:\) .
For production SQL Server environments, don't use the operating system disk for data
files, log files, or error logs.
Temporary disk
Many Azure VMs contain another disk type called the temporary disk (labeled as the
D:\ drive). Depending on the VM series and size, the capacity of this disk varies. The
temporary disk is ephemeral, which means the disk storage is recreated (as in, it's
deallocated and allocated again), when the VM is restarted, or moved to a different host
(for service healing, for example).
The temporary storage drive isn't persisted to remote storage and therefore shouldn't
store user database files, transaction log files, or anything that must be preserved.
Place tempdb on the local temporary SSD D:\ drive for SQL Server workloads unless
consumption of local cache is a concern. If you're using a VM that doesn't have a
temporary disk then it's recommended to place tempdb on its own isolated disk or
storage pool with caching set to read-only. To learn more, see tempdb data caching
policies.
Data disks
Data disks are remote storage disks that are often created in storage pools in order to
exceed the capacity and performance that any single disk could offer to the VM.
Attach the minimum number of disks that satisfies the IOPS, throughput, and capacity
requirements of your workload. Don't exceed the maximum number of data disks of the
smallest VM you plan to resize to.
Place data and log files on data disks provisioned to best suit performance
requirements.
Format your data disk to use 64-KB allocation unit size for all data files placed on a drive
other than the temporary D:\ drive (which has a default of 4 KB). SQL Server VMs
deployed through Azure Marketplace come with data disks formatted with allocation
unit size and interleave for the storage pool set to 64 KB.
Note
It's also possible to host your SQL Server database files directly on Azure Blob
storage or on SMB storage such as Azure premium file share, but we recommend
using Azure managed disks for the best performance, reliability, and feature
availability.
Premium SSD v2
You should use Premium SSD v2 disks when running SQL Server workloads in supported
regions, if the current limitations are suitable for your environment. Depending on your
configuration, Premium SSD v2 can be cheaper than Premium SSDs, while also providing
performance improvements. With Premium SSD v2, you can adjust throughput and IOPS
independently of the size of your disk. This flexibility allows for larger cost
savings and lets you script performance changes for anticipated or known periods of
increased demand. We recommend Premium SSD v2 with the Ebdsv5 VM series, as it's a
more cost-effective solution for these high I/O throughput machines. Premium SSD v2
doesn't currently support host caching, so choosing a VM size with high uncached
throughput such as the Ebdsv5 series VMs is recommended.
Premium SSD v2 disks aren't currently supported by SQL Server gallery images, but they
can be used with SQL Server on Azure VMs when configured manually.
Premium SSD
Use Premium SSDs for data and log files for production SQL Server workloads. Premium
SSD IOPS and bandwidth vary based on the disk size and type.
For production workloads, use the P30 and/or P40 disks for SQL Server data files to
ensure caching support and use the P30 up to P80 for SQL Server transaction log files.
For the best total cost of ownership, start with P30s (5,000 IOPS/200 MBps) for data and
log files and only choose higher capacities when you need to control the VM disk count.
For dev/test or small systems you can choose to use sizes smaller than P30 as these do
support caching, but they don't offer reserved pricing.
For OLTP workloads, match the target IOPS per disk (or storage pool) with your
performance requirements using workloads at peak times and the Disk Reads/sec +
Disk Writes/sec performance counters. For data warehouse and reporting workloads,
match the target throughput using workloads at peak times and the Disk Read
Bytes/sec + Disk Write Bytes/sec .
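The counter arithmetic above can be sketched as follows; the sample values and the 20% headroom factor are hypothetical, not recommendations from this article.

```python
# Sketch: derive IOPS and throughput targets from Performance Monitor
# samples taken at peak. Counter names follow the article; sample values
# and the headroom factor are hypothetical.
def iops_target(disk_reads_per_sec, disk_writes_per_sec, headroom=1.2):
    """Total IOPS requirement = Disk Reads/sec + Disk Writes/sec,
    plus growth headroom."""
    return (disk_reads_per_sec + disk_writes_per_sec) * headroom

def throughput_target_mbps(read_bytes_per_sec, write_bytes_per_sec, headroom=1.2):
    """Total throughput requirement in MB/s = Disk Read Bytes/sec +
    Disk Write Bytes/sec, plus growth headroom."""
    return (read_bytes_per_sec + write_bytes_per_sec) / 1_000_000 * headroom

# Hypothetical peak samples:
print(iops_target(6500, 3500))              # ~12,000 IOPS with 20% headroom
print(throughput_target_mbps(120e6, 30e6))  # ~180 MB/s with 20% headroom
```

Size the disk or storage pool to meet whichever of the two targets binds first for your workload.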
To achieve optimal performance with Storage Spaces, configure two pools: one for the
log file(s) and the other for the data files. If you aren't using disk striping, use two
premium SSD disks mapped to separate drives, where one drive contains the log file and
the other contains the data.
The provisioned IOPS and throughput of each disk in your storage pool contribute to
the pool's capability: the combined IOPS and throughput of the disks is the maximum
capability, up to the throughput limits of the VM.
The best practice is to use the fewest disks possible while meeting the minimum
requirements for IOPS (and throughput) and capacity. However, the balance of price and
performance tends to be better with a large number of small disks rather than a small
number of large disks.
Performance tiers
Changing the performance tier allows administrators to prepare for and meet higher
demand without relying on disk bursting.
Use the higher performance for as long as needed; billing reflects the active
performance tier. Upgrade the tier to match the performance requirements without
increasing the capacity. Return to the original tier when the extra performance is
no longer required.
This cost-effective and temporary expansion of performance is a strong use case for
targeted events such as shopping, performance testing, training events and other brief
windows where greater performance is needed only for a short term.
For more information, see Performance tiers for managed disks.
Ultra disks
Ultra disks can be configured so that capacity, IOPS, and throughput scale
independently. With ultra disks, administrators can provision a disk with the
capacity, IOPS, and throughput required by the application.
Ultra disk isn't supported on all VM series and has other limitations such as region
availability, redundancy, and support for Azure Backup. To learn more, see Using Azure
ultra disks for a full list of limitations.
Caching
VMs that support premium storage caching can take advantage of an additional feature
called the Azure BlobCache or host caching to extend the IOPS and throughput
capabilities of a VM. VMs enabled for both premium storage and premium storage
caching have these two different storage bandwidth limits that can be used together to
improve storage performance.
The IOPS and MBps throughput without caching counts against a VM's uncached disk
throughput limits. The maximum cached limits provide another buffer for reads that
helps address growth and unexpected peaks.
Reads and writes to the Azure BlobCache (cached IOPS and throughput) don't count
against the uncached IOPS and throughput limits of the VM.
Note
Disk Caching is not supported for disks 4 TiB and larger (P50 and larger). If multiple
disks are attached to your VM, each disk that is smaller than 4 TiB will support
caching. For more information, see Disk caching.
Uncached throughput
The max uncached disk IOPS and throughput is the maximum remote storage limit that
the VM can handle. This limit is defined at the VM and isn't a limit of the underlying disk
storage. This limit applies only to I/O against data drives remotely attached to the VM,
not the local I/O against the temp drive ( D:\ drive) or the OS drive.
The amount of uncached IOPS and throughput that is available for a VM can be verified
in the documentation for your VM.
For example, the M-series documentation shows that the max uncached throughput for
the Standard_M8ms VM is 5000 IOPS and 125 MBps of uncached disk throughput.
Likewise, you can see that the Standard_M32ts supports 20,000 uncached disk IOPS and
500 MBps of uncached disk throughput. This limit is governed at the VM level regardless
of the underlying premium disk storage.
Cached and temp storage throughput
The max cached and temp storage throughput limit governs the I/O against the local
temp drive ( D:\ drive) and the Azure BlobCache, and applies only if host caching is
enabled.
When caching is enabled on premium storage, VMs can scale beyond the limitations of
the remote storage uncached VM IOPS and throughput limits.
Only certain VMs support both premium storage and premium storage caching (verify
this in the virtual machine documentation). For example, the M-series
documentation indicates that both premium storage and premium storage caching are
supported:
The limits of the cache vary based on the VM size. For example, the Standard_M8ms VM
supports 10,000 cached disk IOPS and 1,000 MBps of cached disk throughput with a total
cache size of 793 GiB. Similarly, the Standard_M32ts VM supports 40,000 cached disk
IOPS and 400 MBps of cached disk throughput with a total cache size of 3,174 GiB.
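To illustrate how the two budgets combine, here's a minimal sketch using the Standard_M8ms and Standard_M32ts figures quoted above (verify them against the current VM size documentation before sizing):

```python
# Cached and uncached limits are separate budgets at the VM level;
# I/O served from the BlobCache doesn't count against the uncached
# (remote disk) limit. Figures from the article text; verify in Azure docs.
VM_LIMITS = {
    "Standard_M8ms":  {"uncached_iops": 5_000,  "uncached_mbps": 125,
                       "cached_iops": 10_000,   "cached_mbps": 1_000},
    "Standard_M32ts": {"uncached_iops": 20_000, "uncached_mbps": 500,
                       "cached_iops": 40_000,   "cached_mbps": 400},
}

def total_possible_iops(vm):
    """Best-case IOPS if the workload fully uses both budgets; actual
    results depend on the cache hit ratio."""
    limits = VM_LIMITS[vm]
    return limits["uncached_iops"] + limits["cached_iops"]

print(total_possible_iops("Standard_M8ms"))   # 15000
print(total_possible_iops("Standard_M32ts"))  # 60000
```

Actual performance varies with the workload's ability to use the cache, so treat the sum as an upper bound, not a guarantee.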
You can manually enable host caching on an existing VM. Stop all application workloads
and the SQL Server services before any changes are made to your VM's caching policy.
Changing any of the VM cache settings results in the target disk being detached and
reattached after the settings are applied.
Data disk: Enable Read-only caching for the disks hosting SQL Server data files.
Reads from cache are faster than uncached reads from the data disk. Uncached IOPS
and throughput plus cached IOPS and throughput yield the total possible performance
available from the VM within the VM's limits, but actual performance varies based on
the workload's ability to use the cache (cache hit ratio).
Transaction log disk: Set the caching policy to None for disks hosting the
transaction log. There's no performance benefit to enabling caching for the
transaction log disk; in fact, having either Read-only or Read/Write caching enabled
on the log drive can degrade write performance against the drive and decrease the
amount of cache available for reads on the data drive.
tempdb: If tempdb can't be placed on the ephemeral drive D:\ due to capacity
reasons, either resize the VM to get a larger ephemeral drive or place tempdb on a
separate data drive with Read-only caching configured. The VM cache and ephemeral
drive both use the local SSD, so keep this in mind when sizing, because tempdb I/O
counts against the cached IOPS and throughput VM limits when hosted on the
ephemeral drive.
Important
Changing the cache setting of an Azure disk detaches and reattaches the target
disk. When changing the cache setting for a disk that hosts SQL Server data, log, or
application files, be sure to stop the SQL Server service along with any other related
services to avoid data corruption.
Disk striping
Analyze the throughput and bandwidth required for your SQL Server data files,
including the log file and tempdb , to determine the number of data disks. Throughput
and bandwidth limits vary by VM size. To learn more, see VM size.
Add more data disks and use disk striping for more throughput. For example, an
application that needs 12,000 IOPS and 180 MB/s of throughput can use three striped
P30 disks to deliver 15,000 IOPS and 600 MB/s of throughput.
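The striping arithmetic can be sketched as follows, using the P30 figures cited in this article (5,000 IOPS and 200 MB/s per disk):

```python
# Sketch: aggregate capability of a striped storage pool built from
# identical Premium SSDs. Per-disk P30 figures are from this article.
P30 = {"iops": 5_000, "mbps": 200}

def pool_capability(disk, count):
    """A stripe of N identical disks offers roughly N x the per-disk
    IOPS and throughput, up to the VM's own limits."""
    return {"iops": disk["iops"] * count, "mbps": disk["mbps"] * count}

pool = pool_capability(P30, 3)
print(pool)  # {'iops': 15000, 'mbps': 600}
# Covers an application needing 12,000 IOPS and 180 MB/s, with headroom.
```

Remember that the pool's aggregate is still subject to the VM-level limits discussed in the next section.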
Disk capping
There are throughput limits at both the disk and VM level. The maximum IOPS limits per
VM and per disk differ and are independent of each other.
Applications that consume resources beyond these limits will be throttled (also known
as capped). Select a VM and disk size in a disk stripe that meets application
requirements and won't face capping limitations. To address capping, use caching or
tune the application so that less throughput is required.
For example, an application that needs 12,000 IOPS and 180 MB/s can use a striped set
of disks whose combined limits meet those requirements, on a VM whose own limits also
accommodate them.
VMs configured to scale up during times of high utilization should provision storage
with enough IOPS and throughput to support the maximum VM size while keeping the
overall number of disks less than or equal to the maximum number supported by the
smallest VM SKU targeted to be used.
For more information on disk capping limitations and using caching to avoid capping,
see Disk IO capping.
Note
Some disk capping may still result in satisfactory performance; tune and maintain
workloads rather than resizing to a larger VM, to balance cost and performance for
the business.
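A minimal sketch of the capping logic (all numbers hypothetical): the effective limit is the lower of the VM's uncached limit and the combined limits of the striped disks, and demand beyond it is throttled.

```python
# Sketch: disk-level and VM-level limits are independent; the effective
# ceiling is whichever is lower. Numbers below are hypothetical.
def effective_iops_limit(vm_uncached_iops, per_disk_iops, disk_count):
    """Effective IOPS ceiling for a striped pool on a given VM."""
    return min(vm_uncached_iops, per_disk_iops * disk_count)

def is_capped(demand_iops, vm_uncached_iops, per_disk_iops, disk_count):
    """True if the workload's demand exceeds the effective ceiling."""
    return demand_iops > effective_iops_limit(
        vm_uncached_iops, per_disk_iops, disk_count)

# Three 5,000-IOPS disks on a VM limited to 12,800 uncached IOPS:
print(effective_iops_limit(12_800, 5_000, 3))   # 12800 - the VM caps first
print(is_capped(12_000, 12_800, 5_000, 3))      # False
print(is_capped(14_000, 12_800, 5_000, 3))      # True - would be throttled
```

When `is_capped` is true, the options are the ones the article lists: enable caching, tune the application, or change the VM or disk configuration.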
Write Acceleration
Write Acceleration is a disk feature that's only available for M-series VMs. The
purpose of Write Acceleration is to improve the I/O latency of writes against Azure
Premium Storage when you need single-digit I/O latency for high-volume,
mission-critical OLTP workloads or data warehouse environments.
Use Write Acceleration to improve write latency to the drive hosting the log files. Don't
use Write Acceleration for SQL Server data files.
Write Accelerator disks share the same IOPS limit as the VM. Attached disks can't exceed
the Write Accelerator IOPS limit for a VM.
The following table outlines the number of data disks and IOPS supported per VM:
There are several restrictions to using Write Acceleration. To learn more, see Restrictions
when using Write Accelerator.
If possible, use Write Acceleration over ultra disks for the transaction log disk. For VMs
that don't support Write Acceleration but require low latency to the transaction log, use
Azure ultra disks.
Monitor storage performance
To assess storage needs and determine how well storage is performing, you need to
understand what to measure and what those indicators mean.
IOPS (input/output operations per second) is the number of requests the application
makes to storage per second. Measure IOPS using the Performance Monitor counters Disk
Reads/sec and Disk Writes/sec . OLTP (online transaction processing) applications need to drive
higher IOPS in order to achieve optimal performance. Applications such as payment
processing systems, online shopping, and retail point-of-sale systems are all examples
of OLTP applications.
Throughput is the volume of data that is being sent to the underlying storage, often
measured by megabytes per second. Measure throughput with the Performance
Monitor counters Disk Read Bytes/sec and Disk Write Bytes/sec . Data warehousing is
optimized around maximizing throughput over IOPS. Applications such as data stores
for analysis, reporting, ETL workstreams, and other business intelligence targets are all
examples of data warehousing applications.
I/O unit sizes influence IOPS and throughput capabilities: smaller I/O sizes yield higher
IOPS and larger I/O sizes yield higher throughput. SQL Server chooses the optimal I/O
size automatically. For more information, see Optimize IOPS, throughput, and
latency for your applications.
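The relationship between I/O size, IOPS, and throughput can be sketched directly; the I/O sizes below are illustrative of typical OLTP versus scan-style patterns, not values prescribed by this article:

```python
# Sketch: throughput is the product of IOPS and I/O size, so smaller
# I/Os favor IOPS while larger I/Os favor throughput.
def throughput_mbps(iops, io_size_kib):
    """Throughput in MiB/s for a given IOPS rate and I/O size in KiB."""
    return iops * io_size_kib / 1024  # KiB -> MiB

# Illustrative I/O sizes (OLTP-style small I/O vs scan-style large I/O):
print(throughput_mbps(12_000, 8))    # 93.75 MiB/s from 8 KiB I/Os
print(throughput_mbps(3_000, 512))   # 1500.0 MiB/s from 512 KiB I/Os
```

This is why an OLTP workload can saturate a disk's IOPS limit long before its throughput limit, while a data warehouse scan does the opposite.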
There are specific Azure Monitor metrics that are invaluable for discovering capping at
the VM and disk level, as well as the consumption and health of the Azure BlobCache.
To identify key counters to add to your monitoring solution and Azure portal dashboard,
see Storage utilization metrics.
Note
Azure Monitor doesn't currently offer disk-level metrics for the ephemeral temp
drive (D:\) . VM Cached IOPS Consumed Percentage and VM Cached Bandwidth
Consumed Percentage will reflect IOPS and throughput from both the ephemeral
temp drive (D:\) and host caching together.
Next steps
To learn more, see the other articles in this best practices series:
Quick checklist
VM size
Security
HADR settings
Collect baseline
For security best practices, see Security considerations for SQL Server on Azure
Virtual Machines.
For detailed testing of SQL Server performance on Azure VMs with TPC-E and
TPC_C benchmarks, refer to the blog Optimize OLTP performance .
Review other SQL Server Virtual Machine articles at SQL Server on Azure Virtual
Machines Overview. If you have questions about SQL Server virtual machines, see
the Frequently Asked Questions.
Security considerations for SQL Server
on Azure Virtual Machines
Article • 03/29/2023
Applies to:
SQL Server on Azure VM
This article includes overall security guidelines that help establish secure access to SQL
Server instances in an Azure virtual machine (VM).
Azure complies with several industry regulations and standards that can enable you to
build a compliant solution with SQL Server running in a virtual machine. For information
about regulatory compliance with Azure, see Azure Trust Center .
First review the security best practices for SQL Server and Azure VMs and then review
this article for the best practices that apply to SQL Server on Azure VMs specifically.
To learn more about SQL Server VM best practices, see the other articles in this series:
Checklist, VM size, HADR configuration, and Collect baseline.
Checklist
Review the following checklist in this section for a brief overview of the security best
practices that the rest of the article covers in greater detail.
SQL Server features and capabilities provide security at the data level and are how
you achieve defense-in-depth at the infrastructure level for cloud-based and hybrid
solutions. In addition, Azure security measures make it possible to encrypt your
sensitive data, protect virtual machines from viruses and malware, secure network
traffic, identify and detect threats, meet compliance requirements, and provide a
single method for administration and reporting for any security need in the hybrid
cloud.
Use Microsoft Defender for Cloud to evaluate and take action to improve the
security posture of your data environment. Capabilities such as Azure Advanced
Threat Protection (ATP) can be leveraged across your hybrid workloads to improve
security evaluation and give the ability to react to risks. Registering your SQL
Server VM with the SQL IaaS Agent extension surfaces Microsoft Defender for
Cloud assessments within the SQL virtual machine resource of the Azure portal.
Use Microsoft Defender for SQL to discover and mitigate potential database
vulnerabilities, as well as detect anomalous activities that could indicate a threat to
your SQL Server instance and database layer.
Vulnerability Assessment is a part of Microsoft Defender for SQL that can discover
and help remediate potential risks to your SQL Server environment. It provides
visibility into your security state, and includes actionable steps to resolve security
issues.
Use Azure confidential VMs to reinforce protection of your data in-use, and data-
at-rest against host operator access. Azure confidential VMs allow you to
confidently store your sensitive data in the cloud and meet strict compliance
requirements.
If you're on SQL Server 2022, consider using Azure Active Directory authentication
to connect to your instance of SQL Server.
Azure Advisor analyzes your resource configuration and usage telemetry and then
recommends solutions that can help you improve the cost effectiveness,
performance, high availability, and security of your Azure resources. Leverage
Azure Advisor at the virtual machine, resource group, or subscription level to help
identify and apply best practices to optimize your Azure deployments.
Use Azure Disk Encryption when your compliance and security needs require you
to encrypt the data end-to-end using your encryption keys, including encryption of
the ephemeral (locally attached temporary) disk.
Managed Disks are encrypted at rest by default using Azure Storage Service
Encryption, where the encryption keys are Microsoft-managed keys stored in
Azure.
For a comparison of the managed disk encryption options review the managed
disk encryption comparison chart
Management ports should be closed on your virtual machines - Open remote
management ports expose your VM to a high level of risk from internet-based
attacks. These attacks attempt to brute force credentials to gain admin access to
the machine.
Turn on Just-in-time (JIT) access for Azure virtual machines
Use Azure Bastion over Remote Desktop Protocol (RDP).
Lock down ports and only allow the necessary application traffic using Azure
Firewall, a managed Firewall as a Service (FaaS) that grants or denies server
access based on the originating IP address.
Use Network Security Groups (NSGs) to filter network traffic to, and from, Azure
resources on Azure Virtual Networks.
Leverage Application Security Groups to group together servers with similar port
filtering requirements and similar functions, such as web servers and database
servers.
For web and application servers, leverage Azure Distributed Denial of Service
(DDoS) protection. DDoS attacks are designed to overwhelm and exhaust network
resources, making apps slow or unresponsive, and they commonly target user
interfaces. Azure DDoS protection sanitizes unwanted network traffic before it
impacts service availability.
Use VM extensions to help address anti-malware, desired state, threat detection,
prevention, and remediation to address threats at the operating system, machine,
and network levels:
Guest Configuration extension performs audit and configuration operations
inside virtual machines.
Network Watcher Agent virtual machine extension for Windows and Linux provides
network performance monitoring, diagnostics, and analytics for Azure networks.
Microsoft Antimalware Extension for Windows to help identify and remove
viruses, spyware, and other malicious software, with configurable alerts.
Evaluate third-party extensions such as Symantec Endpoint Protection for
Windows VM (/azure/virtual-machines/extensions/symantec).
Use Azure Policy to create business rules that can be applied to your environment.
Azure Policies evaluate Azure resources by comparing the properties of those
resources against rules defined in JSON format.
Azure Blueprints enables cloud architects and central information technology
groups to define a repeatable set of Azure resources that implements and adheres
to an organization's standards, patterns, and requirements. Azure Blueprints are
different from Azure Policies.
For more information about security best practices, see SQL Server security best
practices and Securing SQL Server.
Vulnerability Assessments can discover and help remediate potential risks to your
SQL Server environment. It provides visibility into your security state, and it
includes actionable steps to resolve security issues.
Use security score in Microsoft Defender for Cloud.
Review the list of the compute and data recommendations currently available, for
further details.
Registering your SQL Server VM with the SQL Server IaaS Agent Extension surfaces
Microsoft Defender for SQL recommendations to the SQL virtual machines
resource in the Azure portal.
Portal management
After you've registered your SQL Server VM with the SQL IaaS Agent extension, you can
configure a number of security settings using the SQL virtual machines resource in the
Azure portal, such as enabling Azure Key Vault integration, or SQL authentication.
Additionally, after you've enabled Microsoft Defender for SQL on machines you can view
Defender for Cloud features directly within the SQL virtual machines resource in the
Azure portal, such as vulnerability assessments and security alerts.
Confidential VMs
Azure confidential VMs provide a strong, hardware-enforced boundary that hardens the
protection of the guest OS against host operator access. Choosing a confidential VM
size for your SQL Server on Azure VM provides an extra layer of protection, enabling you
to confidently store your sensitive data in the cloud and meet strict compliance
requirements.
Azure confidential VMs leverage AMD processors with SEV-SNP technology that encrypt
the memory of the VM using keys generated by the processor. This helps protect data
while it's in use (the data that is processed inside the memory of the SQL Server process)
from unauthorized access from the host OS. The OS disk of a confidential VM can also
be encrypted with keys bound to the Trusted Platform Module (TPM) chip of the virtual
machine, reinforcing protection for data-at-rest.
For detailed deployment steps, see the Quickstart: Deploy SQL Server to a confidential
VM.
Recommendations for disk encryption are different for confidential VMs than for the
other VM sizes. See disk encryption to learn more.
Azure AD authentication
Starting with SQL Server 2022, you can connect to SQL Server using one of the following
Azure Active Directory (Azure AD) identity authentication methods:
Azure AD Password
Azure AD Integrated
Azure AD Universal with Multi-Factor Authentication
Azure Active Directory access token
To get started, review Configure Azure AD authentication for your SQL Server VM.
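As an illustration, a client connection string using the Azure AD Password method might look like the following. The server name, database, and user are placeholders, and the Authentication keyword requires a driver that supports Azure AD authentication, such as recent versions of Microsoft.Data.SqlClient or the Microsoft ODBC driver:

```
Server=tcp:mysqlvm.contoso.com,1433;Database=mydb;Encrypt=True;
Authentication=Active Directory Password;User ID=user@contoso.com;Password=<password>
```

For the Integrated and Multi-Factor Authentication methods, the Authentication keyword value changes and the password is omitted or collected interactively.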
Azure Advisor
Azure Advisor is a personalized cloud consultant that helps you follow best practices to
optimize your Azure deployments. Azure Advisor analyzes your resource configuration
and usage telemetry and then recommends solutions that can help you improve the
cost effectiveness, performance, high availability, and security of your Azure resources.
Azure Advisor can evaluate at the virtual machine, resource group, or subscription level.
Access control
When you create a SQL Server virtual machine with an Azure gallery image, the SQL
Server Connectivity option gives you the choice of Local (inside VM), Private (within
Virtual Network), or Public (Internet).
For the best security, choose the most restrictive option for your scenario. For example,
if you are running an application that accesses SQL Server on the same VM, then Local is
the most secure choice. If you are running an Azure application that requires access to
the SQL Server, then Private secures communication to SQL Server only within the
specified Azure virtual network. If you require Public (internet) access to the SQL Server
VM, then make sure to follow other best practices in this topic to reduce your attack
surface area.
The selected options in the portal use inbound security rules on the VM's network
security group (NSG) to allow or deny network traffic to your virtual machine. You can
modify or create new inbound NSG rules to allow traffic to the SQL Server port (default
1433). You can also specify specific IP addresses that are allowed to communicate over
this port.
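As a sketch of such a rule, the following Azure CLI command allows inbound SQL Server traffic (default port 1433) only from a known application subnet. All resource names and the address prefix are hypothetical; adjust the priority so it doesn't conflict with your existing rules:

```shell
# Hypothetical resource group, NSG name, and source subnet.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-sqlvm-nsg \
  --name allow-sql-1433 \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges 1433
```

Traffic from other sources to port 1433 then falls through to lower-priority rules, typically a default deny.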
In addition to NSG rules to restrict network traffic, you can also use the Windows
Firewall on the virtual machine.
If you are using endpoints with the classic deployment model, remove any endpoints on
the virtual machine if you do not use them. For instructions on using ACLs with
endpoints, see Manage the ACL on an endpoint. This is not necessary for VMs that use
the Azure Resource Manager.
Consider enabling encrypted connections for the instance of the SQL Server Database
Engine in your Azure virtual machine, and configure the SQL Server instance with a
signed certificate. For more information, see Enable Encrypted Connections to the
Database Engine and Connection String Syntax.
Azure Firewall - A stateful, managed Firewall as a Service (FaaS) that grants or
denies server access based on originating IP address, to protect network resources.
Azure Distributed Denial of Service (DDoS) protection - DDoS attacks overwhelm
and exhaust network resources, making apps slow or unresponsive. Azure DDoS
protection sanitizes unwanted network traffic before it impacts service availability.
Network Security Groups (NSGs) - Filters network traffic to, and from, Azure
resources on Azure Virtual Networks.
Application Security Groups - Provides for the grouping of servers with similar port
filtering requirements, and group together servers with similar functions, such as
web servers.
Disk encryption
This section provides guidance for disk encryption, but the recommendations vary
depending on if you're deploying a conventional SQL Server on Azure VM, or SQL
Server to an Azure confidential VM.
Conventional VMs
Managed disks deployed to VMs that aren't Azure confidential VMs can use server-side
encryption and Azure Disk Encryption. Server-side encryption provides encryption at
rest and safeguards your data to meet your organizational security and compliance
commitments. Azure Disk Encryption uses either BitLocker or DM-Crypt technology, and
integrates with Azure Key Vault to encrypt both the OS and data disks.
Azure Disk Encryption - Encrypts virtual machine disks using Azure Disk Encryption
for both Windows and Linux virtual machines.
When your compliance and security requirements require you to encrypt the data
end-to-end using your encryption keys, including encryption of the ephemeral
(locally attached temporary) disk, use Azure Disk Encryption.
Azure Disk Encryption (ADE) leverages the industry-standard BitLocker feature of
Windows and the DM-Crypt feature of Linux to provide OS and data disk encryption.
Managed Disk Encryption
Managed Disks are encrypted at rest by default using Azure Storage Service
Encryption where the encryption keys are Microsoft managed keys stored in
Azure.
Data in Azure managed disks is encrypted transparently using 256-bit AES
encryption, one of the strongest block ciphers available, and is FIPS 140-2
compliant.
For a comparison of the managed disk encryption options review the managed
disk encryption comparison chart.
Configure confidential OS disk encryption, which binds the OS disk encryption keys
to the Trusted Platform Module (TPM) chip of the virtual machine, and makes the
protected disk content accessible only to the VM.
Encrypt your data disks (any disks containing database files, log files, or backup
files) with BitLocker, and enable automatic unlocking - review manage-bde
autounlock or EnableBitLockerAutoUnlock for more information. Automatic
unlocking ensures the encryption keys are stored on the OS disk. In conjunction
with confidential OS disk encryption, this protects the data-at-rest stored to the
VM disks from unauthorized host access.
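As a sketch, automatic unlocking for a hypothetical data drive F: can be enabled with the built-in manage-bde tool (the drive letter is an assumption for illustration; the volume must already be BitLocker-encrypted):

```powershell
# Enable BitLocker automatic unlocking for data drive F: (hypothetical drive letter).
# The external key that unlocks the volume is stored on the OS disk.
manage-bde -autounlock -enable F:
```

The Enable-BitLockerAutoUnlock PowerShell cmdlet achieves the same result.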
Trusted Launch
When you deploy a generation 2 virtual machine, you have the option to enable trusted
launch, which protects against advanced and persistent attack techniques.
Securely deploy virtual machines with verified boot loaders, OS kernels, and
drivers.
Securely protect keys, certificates, and secrets in the virtual machines.
Gain insights and confidence of the entire boot chain's integrity.
Ensure workloads are trusted and verifiable.
The following features are currently unsupported when you enable trusted launch for
your SQL Server on Azure VMs:
Manage accounts
You don't want attackers to easily guess account names or passwords. Use the following
tips to help:
Use complex, strong passwords for all your accounts. For more information about
how to create a strong password, see the Create a strong password article.
Create a SQL account with a unique name that has sysadmin membership. You
can do this from the portal by enabling SQL Authentication during
provisioning.
Tip
If you must use the SA login, enable the login after provisioning and assign a
new strong password.
Note
For other topics related to running SQL Server in Azure VMs, see SQL Server on Azure
Virtual Machines overview. If you have questions about SQL Server virtual machines, see
the Frequently Asked Questions.
To learn more, see the other articles in this best practices series:
Quick checklist
VM size
Storage
HADR settings
Collect baseline
HADR configuration best practices (SQL
Server on Azure VMs)
Article • 03/30/2023
Applies to:
SQL Server on Azure VM
A Windows Server Failover Cluster is used for high availability and disaster recovery
(HADR) with SQL Server on Azure Virtual Machines (VMs).
This article provides cluster configuration best practices for both failover cluster
instances (FCIs) and availability groups when you use them with SQL Server on Azure
VMs.
To learn more, see the other articles in this series: Checklist, VM size, Storage, Security,
HADR configuration, Collect baseline.
Checklist
Review the following checklist for a brief overview of the HADR best practices that the
rest of the article covers in greater detail.
High availability and disaster recovery (HADR) features, such as Always On
availability groups and failover cluster instances, rely on underlying Windows Server
Failover Cluster technology. Review the best practices for modifying your HADR settings
to better support the cloud environment.
Deploy your SQL Server VMs to multiple subnets whenever possible to avoid the
dependency on an Azure Load Balancer or a distributed network name (DNN) to
route traffic to your HADR solution.
Change the cluster to less aggressive parameters to avoid unexpected outages
from transient network failures or Azure platform maintenance. To learn more, see
heartbeat and threshold settings. For Windows Server 2012 and later, use the
following recommended values:
SameSubnetDelay: 1 second
SameSubnetThreshold: 40 heartbeats
CrossSubnetDelay: 1 second
CrossSubnetThreshold: 40 heartbeats
Place your VMs in an availability set or different availability zones. To learn more,
see VM availability settings.
Use a single NIC per cluster node.
Configure cluster quorum voting to use an odd number of votes, with three or more.
Don't assign votes to nodes in DR regions.
Carefully monitor resource limits to avoid unexpected restarts or failovers due to
resource constraints.
Ensure your OS, drivers, and SQL Server are at the latest builds.
Optimize performance for SQL Server on Azure VMs. Review the other sections
in this article to learn more.
Reduce or spread out workload to avoid resource limits.
Move to a VM or disk that has higher limits to avoid constraints.
For your SQL Server availability group or failover cluster instance, consider these best
practices:
To connect to your HADR solution using the distributed network name (DNN),
consider the following:
You must use a client driver that supports MultiSubnetFailover = True , and this
parameter must be in the connection string.
Use a unique DNN port in the connection string when connecting to the DNN
listener for an availability group.
Use a database mirroring connection string for a basic availability group to bypass
the need for a load balancer or DNN.
Validate the sector size of your VHDs before deploying your high availability
solution to avoid having misaligned I/Os. See KB3009974 to learn more.
If the SQL Server database engine, Always On availability group listener, or failover
cluster instance health probe are configured to use a port between 49,152 and
65,536 (the default dynamic port range for TCP/IP), add an exclusion for each port.
Doing so prevents other systems from being dynamically assigned the same port.
The following example creates an exclusion for port 59999:
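A sketch of the exclusion command, assuming the documented netsh syntax (run from an elevated prompt while the port isn't in use):

```powershell
# Reserve TCP port 59999 so the dynamic port allocator can't assign it to another process.
netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1 store=persistent
```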
To compare the HADR checklist with the other best practices, see the comprehensive
Performance best practices checklist.
VM availability settings
To reduce the impact of downtime, consider the following best practices for VM availability:
Use proximity placement groups together with accelerated networking for lowest
latency.
Place virtual machine cluster nodes in separate availability zones to protect from
datacenter-level failures or in a single availability set for lower-latency redundancy
within the same datacenter.
Use premium-managed OS and data disks for VMs in an availability set.
Configure each application tier into separate availability sets.
Quorum
Although a two-node cluster will function without a quorum resource, customers are
strictly required to use a quorum resource to have production support. Cluster
validation won't pass any cluster without a quorum resource.
Technically, a three-node cluster can survive a single node loss (down to two nodes)
without a quorum resource, but after the cluster is down to two nodes, if there is
another node loss or communication failure, then there is a risk that the clustered
resources will go offline to prevent a split-brain scenario. Configuring a quorum
resource will allow the cluster to continue online with only one node online.
The disk witness is the most resilient quorum option, but to use a disk witness on a SQL
Server on Azure VM, you must use an Azure Shared Disk which imposes some
limitations to the high availability solution. As such, use a disk witness when you're
configuring your failover cluster instance with Azure Shared Disks, otherwise use a cloud
witness whenever possible.
The following quorum options are available for SQL Server on Azure VMs:
The cloud witness is ideal for deployments in multiple sites, multiple zones, and
multiple regions. Use a cloud witness whenever possible, unless you're using a
shared-storage cluster solution.
The disk witness is the most resilient quorum option and is preferred for any
cluster that uses Azure Shared Disks (or any shared-disk solution like shared SCSI,
iSCSI, or fiber channel SAN). A Clustered Shared Volume cannot be used as a disk
witness.
The fileshare witness is suitable for when the disk witness and cloud witness are
unavailable options.
Quorum Voting
It's possible to change the quorum vote of a node participating in a Windows Server
Failover Cluster.
Quorum voting guidelines
Start with each node having no vote by default. Each node should only have a vote with
explicit justification.
Enable votes for cluster nodes that host the primary replica of an availability group, or
the preferred owners of a failover cluster instance.
Enable votes for automatic failover owners. Each node that may host a primary replica or
FCI as a result of an automatic failover should have a vote.
If an availability group has more than one secondary replica, only enable votes for the
replicas that have automatic failover.
Disable votes for nodes that are in secondary disaster recovery sites. Nodes in secondary
sites shouldn't contribute to the decision of taking a cluster offline if there's nothing
wrong with the primary site.
Have an odd number of votes, with three quorum votes minimum. Add a quorum witness
for an additional vote if necessary in a two-node cluster.
Reassess vote assignments post-failover. You don't want to fail over into a cluster
configuration that doesn't support a healthy quorum.
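These guidelines can be applied with the failover clustering PowerShell module; as a sketch, assuming a hypothetical DR-site node named DRNode1:

```powershell
# Inspect the current quorum vote assignments (NodeWeight 1 = has a vote, 0 = no vote).
Get-ClusterNode | Format-Table Name, State, NodeWeight

# Remove the vote from a node in the secondary disaster recovery site.
(Get-ClusterNode -Name "DRNode1").NodeWeight = 0
```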
Connectivity
To match the on-premises experience for connecting to your availability group listener
or failover cluster instance, deploy your SQL Server VMs to multiple subnets within the
same virtual network. Having multiple subnets negates the need for the extra
dependency on an Azure Load Balancer, or a distributed network name to route your
traffic to your listener.
To simplify your HADR solution, deploy your SQL Server VMs to multiple subnets
whenever possible. To learn more, see Multi-subnet AG, and Multi-subnet FCI.
If your SQL Server VMs are in a single subnet, it's possible to configure either a virtual
network name (VNN) and an Azure Load Balancer, or a distributed network name (DNN)
for both failover cluster instances and availability group listeners.
The distributed network name is the recommended connectivity option, when available:
The end-to-end solution is more robust since you no longer have to maintain the
load balancer resource.
Eliminating the load balancer probes minimizes failover duration.
The DNN simplifies provisioning and management of the failover cluster instance
or availability group listener with SQL Server on Azure VMs.
Most SQL Server features work transparently with FCI and availability groups when using
the DNN, but there are certain features that may require special consideration. See FCI
and DNN interoperability and AG and DNN interoperability to learn more.
Tip
Set the MultiSubnetFailover parameter = true in the connection string even for
HADR solutions that span a single subnet to support future spanning of subnets
without needing to update connection strings.
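For example, a .NET-style connection string with the parameter set might look like the following sketch (the listener and database names are hypothetical):

```powershell
# Hypothetical availability group listener and database names.
# MultiSubnetFailover=True makes the client attempt connections to all subnet IPs in parallel.
$connectionString = "Server=tcp:ag1-listener,1433;Database=AdventureWorks;Integrated Security=SSPI;MultiSubnetFailover=True"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
```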
Heartbeat and threshold settings
When running cluster nodes for SQL Server on Azure VM high availability
solutions, change the cluster settings to a more relaxed monitoring state to avoid
unnecessary failovers caused by the increased possibility of transient network latency or
failure, Azure maintenance, or resource bottlenecks.
The delay and threshold settings have a cumulative effect on total health detection. For
example, setting CrossSubnetDelay to send a heartbeat every 2 seconds and setting the
CrossSubnetThreshold to 10 missed heartbeats before taking recovery action means the
cluster can have a total network tolerance of 20 seconds before recovery action is taken.
In general, continuing to send frequent heartbeats while allowing greater thresholds is
preferred.
To ensure recovery during legitimate outages while providing greater tolerance for
transient issues, relax your delay and threshold settings to the recommended values
listed in the checklist earlier in this article. For example:
PowerShell
(get-cluster).SameSubnetThreshold = 40
(get-cluster).CrossSubnetThreshold = 40
To verify your changes, run:
PowerShell
get-cluster | fl *subnet*
This change takes effect immediately; restarting the cluster or any of its resources isn't required.
Same subnet values should not be greater than cross subnet values.
SameSubnetThreshold <= CrossSubnetThreshold
SameSubnetDelay <= CrossSubnetDelay
Choose relaxed values based on how much downtime is tolerable and how long before
a corrective action should occur, depending on your application, business needs, and
your environment. If you're not able to exceed the default Windows Server 2019 values,
then at least try to match them, if possible.
Relaxed monitoring
If tuning your cluster heartbeat and threshold settings as recommended doesn't provide
sufficient tolerance and you're still seeing failures due to transient issues rather than
true outages, you can configure your AG or FCI monitoring to be more relaxed. In some scenarios, it
may be beneficial to temporarily relax the monitoring for a period of time given the
level of activity. For example, you may want to relax the monitoring when you're doing
IO intensive workloads such as database backups, index maintenance, DBCC CHECKDB,
etc. Once the activity is complete, set your monitoring to less relaxed values.
Warning
Changing these settings may mask an underlying problem, and should be used as a
temporary solution to reduce, rather than eliminate, the likelihood of failure.
Underlying issues should still be investigated and addressed.
Start by increasing the following parameters from their default values for relaxed
monitoring, and adjust as necessary:
Health check timeout - default: 30000 ms; relaxed: 60000 ms. Determines the health of
the primary replica or node. The cluster resource DLL sp_server_diagnostics returns
results at an interval that equals 1/3 of the health-check timeout threshold. If
sp_server_diagnostics is slow or isn't returning information, the resource DLL waits for
the full interval of the health-check timeout threshold before determining that the
resource is unresponsive and initiating an automatic failover, if configured to do so.
Use Transact-SQL (T-SQL) to modify the health check timeout and failure conditions for
both AGs and FCIs. For example, for an availability group:
SQL
ALTER AVAILABILITY GROUP AG1 SET (HEALTH_CHECK_TIMEOUT = 60000);
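For a failover cluster instance, the corresponding change can be sketched with the server-level setting (assuming the documented ALTER SERVER CONFIGURATION syntax):

```sql
-- Relax the FCI health check timeout to 60 seconds (60000 ms).
ALTER SERVER CONFIGURATION
SET FAILOVER CLUSTER PROPERTY HealthCheckTimeout = 60000;
```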
Specific to availability groups, start with the following recommended parameters, and
adjust as necessary:
Session timeout - default: 10000 ms (10 seconds); relaxed: 20000 ms (20 seconds). Checks
communication issues between replicas. The session-timeout period is a replica property
that controls how long (in seconds) an availability replica waits for a ping response
from a connected replica before considering the connection to have failed. By default, a
replica waits 10 seconds for a ping response. This replica property applies only to the
connection between a given secondary replica and the primary replica of the
availability group.
Lease timeout
Use the Failover Cluster Manager to modify the lease timeout settings for your
availability group. See the SQL Server availability group lease health check
documentation for detailed steps.
Session timeout
Use Transact-SQL (T-SQL) to modify the session timeout for an availability group.
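A sketch, assuming a hypothetical availability group AG1 and replica instance name:

```sql
-- Raise the session timeout for a replica from the default 10 seconds to 20 seconds.
ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON 'COMPUTER01\INSTANCE01' WITH (SESSION_TIMEOUT = 20);
```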
Use the Failover Cluster Manager to modify the Max failures in specified period value.
Resource limits
VM or disk limits could result in a resource bottleneck that impacts the health of the
cluster, and impedes the health check. If you're experiencing issues with resource limits,
consider the following:
Ensure your OS, drivers, and SQL Server are at the latest builds.
Optimize SQL Server on Azure VM environment as described in the performance
guidelines for SQL Server on Azure Virtual Machines
Reduce or spread out the workload to reduce utilization without exceeding
resource limits
Tune the SQL Server workload if there is any opportunity, such as
Add/optimize indexes
Update statistics if needed and if possible, with Full scan
Use features like resource governor (starting with SQL Server 2014, enterprise
only) to limit resource utilization during specific workloads, such as backups or
index maintenance.
Move to a VM or disk that has higher limits to meet or exceed the demands of
your workload.
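As an illustration of the Resource Governor suggestion above, the following sketch caps I/O for a hypothetical maintenance login (pool, group, login names, and the IOPS limit are assumptions; per-volume IOPS caps require SQL Server 2014 or later, Enterprise edition):

```sql
USE master;
GO
-- Hypothetical pool that caps I/O issued by maintenance workloads such as backups.
CREATE RESOURCE POOL MaintenancePool WITH (MAX_IOPS_PER_VOLUME = 500);
CREATE WORKLOAD GROUP MaintenanceGroup USING MaintenancePool;
GO
-- Classifier routes sessions from a hypothetical maintenance login into the group.
CREATE FUNCTION dbo.fnMaintenanceClassifier() RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'maintenance_login'
        RETURN N'MaintenanceGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnMaintenanceClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```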
Networking
Deploy your SQL Server VMs to multiple subnets whenever possible to avoid the
dependency on an Azure Load Balancer or a distributed network name (DNN) to route
traffic to your HADR solution.
Use a single NIC per server (cluster node). Azure networking has physical redundancy,
which makes additional NICs unnecessary on an Azure virtual machine guest cluster. The
cluster validation report will warn you that the nodes are reachable only on a single
network. You can ignore this warning on Azure virtual machine guest failover clusters.
Bandwidth limits for a particular VM are shared across NICs and adding an additional
NIC does not improve availability group performance for SQL Server on Azure VMs. As
such, there is no need to add a second NIC.
The non-RFC-compliant DHCP service in Azure can cause the creation of certain failover
cluster configurations to fail. This failure happens because the cluster network name is
assigned a duplicate IP address, such as the same IP address as one of the cluster nodes.
This is an issue when you use availability groups, which depend on the Windows failover
cluster feature.
Consider the scenario when a two-node cluster is created and brought online:
1. The cluster comes online, and then NODE1 requests a dynamically assigned IP
address for the cluster network name.
2. The DHCP service doesn't give any IP address other than NODE1's own IP address,
because the DHCP service recognizes that the request comes from NODE1 itself.
3. Windows detects that a duplicate address is assigned both to NODE1 and to the
failover cluster's network name, and the default cluster group fails to come online.
4. The default cluster group moves to NODE2. NODE2 treats NODE1's IP address as
the cluster IP address and brings the default cluster group online.
5. When NODE2 tries to establish connectivity with NODE1, packets directed at
NODE1 never leave NODE2 because it resolves NODE1's IP address to itself.
NODE2 can't establish connectivity with NODE1, and then loses quorum and shuts
down the cluster.
6. NODE1 can send packets to NODE2, but NODE2 can't reply. NODE1 loses quorum
and shuts down the cluster.
You can avoid this scenario by assigning an unused static IP address to the cluster
network name in order to bring the cluster network name online and add the IP address
to Azure Load Balancer.
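The workaround can be sketched with the failover clustering PowerShell module (the IP address and subnet mask below are hypothetical; pick an unused address in the cluster subnet):

```powershell
# Assign an unused static IP address to the cluster core IP resource (hypothetical values).
Get-ClusterResource -Name "Cluster IP Address" |
    Set-ClusterParameter -Multiple @{ "Address" = "10.0.0.200"; "SubnetMask" = "255.255.255.0"; "EnableDhcp" = 0 }

# Bring the cluster name online with the new address.
Start-ClusterResource -Name "Cluster Name"
```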
If the SQL Server database engine, Always On availability group listener, failover cluster
instance health probe, database mirroring endpoint, cluster core IP resource, or any
other SQL resource is configured to use a port between 49,152 and 65,536 (the default
dynamic port range for TCP/IP), add an exclusion for each port. Doing so will prevent
other system processes from being dynamically assigned the same port. The following
example creates an exclusion for port 59999:
netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1 store=persistent
It's important to configure the port exclusion when the port is not in use; otherwise the
command fails with a message like "The process cannot access the file because it is
being used by another process."
To confirm that the exclusions have been configured correctly, use the following
command: netsh int ipv4 show excludedportrange tcp .
Setting this exclusion for the availability group role IP probe port should prevent events
such as Event ID: 1069 with status 10048. This event can be seen in the Windows
Failover cluster events with the following message:
Cluster resource '<IP name in AG role>' of type 'IP Address' in cluster role
'<AG Name>' failed.
An Event ID: 1069 with status 10048 can be identified from cluster logs with
events like:
Status 10048 refers to: this error occurs if an application attempts to bind a socket
to an IP address/port that has already been used for an existing socket.
This can be caused by an internal process taking the same port that's defined as the
probe port. Remember that the probe port is used by Azure Load Balancer to check the
status of a backend pool instance. If the health probe fails to get a response from a
backend instance, no new connections are sent to that backend instance until the
health probe succeeds again.
Known issues
Review the resolutions for some commonly known issues and errors.
1. Navigate to your Virtual Machine in the Azure Portal - not the SQL virtual
machines.
3. Select Local time to specify the time range you're interested in, and the time zone,
either local to the VM, or UTC/GMT.
4. Select Add metric to add the following two metrics to see the graph:
The Azure Monitor activity log is a platform log in Azure that provides insight into
subscription-level events. The activity log includes information like when a resource is
modified or a virtual machine is started. You can view the activity log in the Azure portal
or retrieve entries with PowerShell and the Azure CLI.
3. Select Timespan and then choose the time frame when your availability group
failed over. Select Apply.
Error 1135
Cluster node 'Node1' was removed from the active failover cluster
membership. The Cluster service on this node may have stopped. This could also
be due to the node having lost communication with other active nodes in the
failover cluster. Run the Validate a Configuration wizard to check your network
configuration. If the condition persists, check for hardware or software errors
related to the network adapters on this node. Also check for failures in any
other network components to which the node is connected such as hubs, switches,
or bridges.
For more information, review Troubleshooting cluster issue with Event ID 1135.
Error 19407: The lease between availability group 'PRODAG' and the Windows
Server Failover Cluster has expired.
A connectivity issue occurred between the instance of SQL Server and the
Windows Server Failover Cluster.
To determine whether the availability group is failing over correctly, check
the corresponding availability group resource in the Windows Server Failover Cluster.
Error 19419: The renewal of the lease between availability group '%.*ls' and
the Windows Server Failover Cluster failed because the existing lease is no longer valid.
Connection timeout
If the session timeout is too aggressive for your availability group environment, you
may see the following messages frequently:
or the endpoint address provided for the replica is not the database
mirroring endpoint of the host server instance.
Error 35206
Certain storage options can trigger a disk reset or surprise-removed event. Storage
throttling can also trigger the disk surprise-remove event.
If you're on Windows Server 2019 and you don't see a Windows cluster IP, you've
configured a distributed network name (DNN), which is only supported on SQL Server
2019. If you have an earlier version of SQL Server, you can remove and re-create the
cluster by using a network name.
Review other Windows failover clustering events, errors, and their solutions.
Next steps
To learn more, see the other articles in this best practices series.
Application patterns and development strategies for SQL Server on Azure VMs
Applies to:
SQL Server on Azure VM
Note
Azure has two different deployment models for creating and working with
resources: Resource Manager and classic. This article covers using both models,
but Microsoft recommends that most new deployments use the Resource Manager
model.
Summary:
Determining which application pattern or patterns to use for your SQL Server-based
applications in an Azure environment is an important design decision and it requires a
solid understanding of how SQL Server and each infrastructure component of Azure
work together. With SQL Server in Azure Infrastructure Services, you can easily migrate,
maintain, and monitor your existing SQL Server applications built on Windows Server to
virtual machines (VMs) in Azure.
The goal of this article is to provide solution architects and developers a foundation for
good application architecture and design, which they can follow when migrating
existing applications to Azure as well as developing new applications in Azure.
For each application pattern, you will find an on-premises scenario, its respective cloud-
enabled solution, and the related technical recommendations. In addition, the article
discusses Azure-specific development strategies so that you can design your
applications correctly. Due to the many possible application patterns, it's recommended
that architects and developers choose the most appropriate pattern for their
applications and users.
A typical n-tier application includes the presentation tier, the business tier, and the data
tier:
Presentation - The presentation tier (web tier, front-end tier) is the layer in which
users interact with an application.
Business - The business tier (middle tier) is the layer that the presentation tier and
the data tier use to communicate with each other; it includes the core functionality of
the system.
Data - The data tier is the server that stores an application's data (for example, a
server running SQL Server).
Application layers describe the logical groupings of the functionality and components in
an application; whereas tiers describe the physical distribution of the functionality and
components on separate physical servers, computers, networks, or remote locations. The
layers of an application may reside on the same physical computer (the same tier) or
may be distributed over separate computers (n-tier), and the components in each layer
communicate with components in other layers through well-defined interfaces. You can
think of the term tier as referring to physical distribution patterns such as two-tier,
three-tier, and n-tier. A 2-tier application pattern contains two application tiers:
application server and database server. The direct communication happens between the
application server and the database server. The application server contains both
web-tier and business-tier components. In the 3-tier application pattern, there are three
application tiers: the web server; the application server, which contains the business
logic tier and/or business-tier data access components; and the database server. The
communication between the web server and the database server happens through the
application server. For detailed information on application layers and tiers, see the Microsoft
Application Architecture Guide.
Before you start reading this article, you should have knowledge of the fundamental
concepts of SQL Server and Azure. For information, see SQL Server Books Online, SQL
Server on Azure Virtual Machines and Azure.com .
This article describes several application patterns that can be suitable for your simple
applications as well as the highly complex enterprise applications. Before detailing each
pattern, we recommend that you familiarize yourself with the available data
storage services in Azure, such as Azure Storage, Azure SQL Database, and SQL Server in
an Azure virtual machine. To make the best design decisions for your applications,
clearly understand when to use each data storage service.
Choose SQL Server on an Azure virtual machine when:
You need full compatibility with SQL Server and want to move existing
applications to Azure as-is.
You want to leverage the capabilities of the Azure environment but Azure SQL
Database does not support all the features that your application requires. This
could include the following areas:
Database size: At the time this article was updated, SQL Database supports a
database of up to 1 TB of data. If your application requires more than 1 TB of
data and you don't want to implement custom sharding solutions, it's
recommended that you use SQL Server in an Azure virtual machine. For the
latest information, see Scaling Out Azure SQL Database, DTU-Based Purchasing
Model, and vCore-Based Purchasing Model (preview).
HIPAA compliance: Healthcare customers and Independent Software Vendors
(ISVs) might choose SQL Server on Azure Virtual Machines instead of Azure SQL
Database because SQL Server on Azure Virtual Machines is covered by HIPAA
Business Associate Agreement (BAA). For information on compliance, see
Microsoft Azure Trust Center: Compliance .
Instance-level features: At this time, SQL Database doesn't support features
that live outside of the database (such as Linked Servers, Agent jobs, FileStream,
Service Broker, etc.). For more information, see Azure SQL Database Guidelines
and Limitations.
You want to perform a simple migration to Azure platform to evaluate whether the
platform answers your application's requirements or not.
You want to keep all the application tiers hosted in the same virtual machine in the
same Azure data center to reduce the latency between tiers.
You want to quickly provision development and test environments for short
periods of time.
You want to perform stress testing for varying workload levels but at the same time
you do not want to own and maintain many physical machines all the time.
The following diagram demonstrates a simple on-premises scenario and how you can
deploy its cloud enabled solution in a single virtual machine in Azure.
Deploying the business layer (business logic and data access components) on the same
physical tier as the presentation layer can maximize application performance, unless you
must use a separate tier due to scalability or security concerns.
Since this is a very common pattern to start with, you might find the following article on
migration useful for moving your data to your SQL Server VM: Migration guide: SQL
Server to SQL Server on Azure Virtual Machines.
The following diagram demonstrates how you can place a simple 3-tier application in
Azure by placing each application tier in a different virtual machine.
In this application pattern, there is only one virtual machine in each tier. If you have
multiple VMs in Azure, we recommend that you set up a virtual network. Azure Virtual
Network creates a trusted security boundary and also allows VMs to communicate
among themselves over the private IP address. In addition, always make sure that all
Internet connections only go to the presentation tier. When following this application
pattern, manage the network security group rules to control access. For more
information, see Allow external access to your VM using the Azure portal.
Note
Setting up a virtual network in Azure is free of charge. However, you are charged
for the VPN gateway that connects to on-premises. This charge is based on the
amount of time that connection is provisioned and available.
The following diagram demonstrates how you can place the application tiers in multiple
virtual machines in Azure by scaling out the presentation tier due to increased volume
of incoming client requests. As seen in the diagram, Azure Load Balancer is responsible
for distributing traffic across multiple virtual machines and also determining which web
server to connect to. Having multiple instances of the web servers behind a load
balancer ensures the high availability of the presentation tier.
Best practices for 2-tier, 3-tier, or n-tier patterns that have
multiple VMs in one tier
It's recommended that you place the virtual machines that belong to the same tier in
the same cloud service and in the same availability set. For example, place a set of
web servers in CloudService1 and AvailabilitySet1 and a set of database servers in
CloudService2 and AvailabilitySet2. An availability set in Azure enables you to place the
high availability nodes into separate fault domains and upgrade domains.
To leverage multiple VM instances of a tier, you need to configure Azure Load Balancer
between application tiers. To configure Load Balancer in each tier, create a load-
balanced endpoint on each tier's VMs separately. For a specific tier, first create VMs in
the same cloud service. This ensures that they have the same public Virtual IP address.
Next, create an endpoint on one of the virtual machines on that tier. Then, assign the
same endpoint to the other virtual machines on that tier for load balancing. By creating
a load-balanced set, you distribute traffic across multiple virtual machines and also allow
the Load Balancer to determine which node to connect when a backend VM node fails.
For example, having multiple instances of the web servers behind a load balancer
ensures the high availability of the presentation tier.
As a best practice, always make sure that all internet connections first go to the
presentation tier. The presentation layer accesses the business tier, and then the
business tier accesses the data tier. For more information on how to allow access to the
presentation layer, see Allow external access to your VM using the Azure portal.
Note that the Load Balancer in Azure works similar to load balancers in an on-premises
environment. For more information, see Load balancing for Azure infrastructure services.
In addition, we recommend that you set up a private network for your virtual machines
by using Azure Virtual Network. This allows them to communicate among themselves
over the private IP address. For more information, see Azure Virtual Network.
The following diagram demonstrates an on-premises scenario and its cloud enabled
solution. In this scenario, you place the application tiers in multiple virtual machines in
Azure by scaling out the business tier, which contains the business logic tier and data
access components. As seen in the diagram, Azure Load Balancer is responsible for
distributing traffic across multiple virtual machines and also determining which web
server to connect to. Having multiple instances of the application servers behind a load
balancer ensures the high availability of the business tier. For more information, see Best
practices for 2-tier, 3-tier, or n-tier application patterns that have multiple virtual
machines in one tier.
The following diagram demonstrates an on-premises scenario and its cloud enabled
solution. In this scenario, you scale out the presentation tier and the business tier
components in multiple virtual machines in Azure. In addition, you implement high
availability and disaster recovery (HADR) techniques for SQL Server databases in Azure.
Running multiple copies of an application in different VMs lets you load balance
requests across them. When you have multiple virtual machines, you need to
make sure that all your VMs are accessible and running at any given point in time. If you
configure load balancing, Azure Load Balancer tracks the health of VMs and directs
incoming calls to the healthy functioning VM nodes properly. For information on how to
set up load balancing of the virtual machines, see Load balancing for Azure
infrastructure services. Having multiple instances of web and application servers behind
a load balancer ensures the high availability of the presentation and business tiers.
Best practices for application patterns requiring SQL
HADR
When you set up SQL Server high availability and disaster recovery solutions in Azure
Virtual Machines, setting up a virtual network for your virtual machines using Azure
Virtual Network is mandatory. Virtual machines within a virtual network have a
stable private IP address even after service downtime, so you can avoid the update
time required for DNS name resolution. In addition, the virtual network allows you to
extend your on-premises network to Azure and creates a trusted security boundary. For
example, if your application has corporate domain restrictions (such as, Windows
authentication, Active Directory), setting up Azure Virtual Network is necessary.
Most customers who run production code on Azure keep both primary and secondary
replicas in Azure.
For comprehensive information and tutorials on high availability and disaster recovery
techniques, see High Availability and Disaster Recovery for SQL Server on Azure Virtual
Machines.
2-tier and 3-tier using Azure Virtual Machines
and Cloud Services
In this application pattern, you deploy 2-tier or 3-tier application to Azure by using both
Azure Cloud Services (web and worker roles - Platform as a Service (PaaS)) and Azure
Virtual Machines (Infrastructure as a Service (IaaS)). Using Azure Cloud Services for the
presentation tier/business tier and SQL Server in Azure Virtual Machines for the data tier
is beneficial for most applications running on Azure. The reason is that having a
compute instance running on Cloud Services provides easy management,
deployment, monitoring, and scale-out.
With Cloud Services, Azure maintains the infrastructure for you, performs routine
maintenance, patches the operating systems, and attempts to recover from service and
hardware failures. When your application needs to scale out, automatic and manual
scale-out options are available for your cloud service project by increasing or decreasing
the number of instances or virtual machines that are used by your application. In addition,
you can use on-premises Visual Studio to deploy your application to a cloud service
project in Azure.
In summary, if you don't want to take on extensive administrative tasks for the
presentation/business tier and your application does not require any complex
configuration of software or the operating system, use Azure Cloud Services. If Azure
SQL Database does not support all the features you are looking for, use SQL Server in an
Azure virtual machine for the data tier. Running an application on Azure Cloud Services
and storing data in Azure Virtual Machines combines the benefits of both services. For a
detailed comparison, see the section in this topic on Comparing development strategies
in Azure.
In this application pattern, the presentation tier includes a web role, a Cloud
Services component that runs in the Azure execution environment and is customized
for web application programming as supported by IIS and ASP.NET. The business or
backend tier includes a worker role, a Cloud Services component that runs in the
Azure execution environment and is useful for generalized development; it may
perform background processing for a web role. The database tier resides in a SQL Server
virtual machine in Azure. The communication between the presentation tier and the
database tier happens directly or through the business tier's worker role components.
The following diagram demonstrates an on-premises scenario and its cloud enabled
solution. In this scenario, you place the presentation tier in web roles, the business tier in
worker roles, and the data tier in virtual machines in Azure. Running multiple copies of
the presentation tier in different web roles lets you load balance requests across
them. When you combine Azure Cloud Services with Azure Virtual Machines, we
recommend that you set up Azure Virtual Network as well. With Azure Virtual Network,
you can have stable and persistent private IP addresses within the same cloud service in
the cloud. Once you define a virtual network for your virtual machines and cloud
services, they can start communicating among themselves over the private IP address. In
addition, having virtual machines and Azure web/worker roles in the same Azure Virtual
Network provides low latency and more secure connectivity. For more information, see
What is a cloud service.
As seen in the diagram, Azure Load Balancer distributes traffic across multiple virtual
machines and also determines which web server or application server to connect to.
Having multiple instances of the web and application servers behind a load balancer
ensures the high availability of the presentation tier and the business tier. For more
information, see Best practices for application patterns requiring SQL HADR.
Another approach to implement this application pattern is to use a consolidated web
role that contains both presentation tier and business tier components as shown in the
following diagram. This application pattern is useful for applications that require stateful
design. Since Azure provides stateless compute nodes on web and worker roles, we
recommend that you implement logic to store session state using one of the following
technologies: Azure Caching, Azure Table Storage, or Azure SQL Database.
Pattern with Azure Virtual Machines, Azure SQL
Database, and Azure App Service (Web Apps)
The primary goal of this application pattern is to show you how to combine Azure
infrastructure as a service (IaaS) components with Azure platform-as-a-service
components (PaaS) in your solution. This pattern is focused on Azure SQL Database for
relational data storage. It does not include SQL Server in an Azure virtual machine, which
is part of the Azure infrastructure as a service offering.
In this application pattern, you deploy a database application to Azure by placing the
presentation and business tiers in the same virtual machine and accessing a database in
Azure SQL Database (SQL Database) servers. You can implement the presentation tier by
using traditional IIS-based web solutions. Or, you can implement a combined
presentation and business tier by using Azure App Service.
Consider this application pattern in the following scenarios:
You already have an existing SQL Database server configured in Azure and you
want to test your application quickly.
You want to test the capabilities of Azure environment.
You want to quickly provision development and test environments for short
periods of time.
Your business logic and data access components can be self-contained within a
web application.
The following diagram demonstrates an on-premises scenario and its cloud enabled
solution. In this scenario, you place the application tiers in a single virtual machine in
Azure and access data in Azure SQL Database.
If you choose to implement a combined web and application tier by using Azure Web
Apps, we recommend that you keep the middle-tier or application tier as dynamic-link
libraries (DLLs) in the context of a web application.
Consider an n-tier hybrid application pattern in the following scenarios:
You want to build applications that run partly in the cloud and partly on-premises.
You want to migrate some or all elements of an existing on-premises application to
the cloud.
You want to move enterprise applications from on-premises virtualized platforms
to Azure.
You want to own an infrastructure environment that can scale up and down on
demand.
You want to quickly provision development and test environments for short
periods of time.
You want a cost effective way to take backups for enterprise database applications.
The following diagram demonstrates an n-tier hybrid application pattern that spans
across on-premises and Azure. As shown in the diagram, on-premises infrastructure
includes Active Directory Domain Services domain controller to support user
authentication and authorization. Note that the diagram demonstrates a scenario where
some parts of the data tier live in an on-premises data center whereas some parts of the
data tier live in Azure. Depending on your application's needs, you can implement
several other hybrid scenarios. For example, you might keep the presentation tier and
the business tier in an on-premises environment but the data tier in Azure.
In Azure, you can use Active Directory as a standalone cloud directory for your
organization, or you can also integrate existing on-premises Active Directory with Azure
Active Directory. As seen in the diagram, the business tier components can access
multiple data sources, such as SQL Server in Azure via a private internal IP address,
on-premises SQL Server via Azure Virtual Network, or SQL Database using the .NET
Framework data provider technologies. In this diagram, Azure SQL Database is an
optional data storage service.
In the n-tier hybrid application pattern, you can implement the following workflow in the
order specified:
2. Plan the resources and configuration needed in the Azure platform, such as
storage accounts and virtual machines.
b. Establish a connection between on-premises and Azure via Azure Virtual Private
network (VPN) tunnel. This method allows you to extend domain policies to a
virtual machine in Azure. In addition, you can set up firewall rules and use
Windows authentication in your virtual machine. Currently, Azure supports
secure site-to-site VPN and point-to-site VPN connections.
4. Set up scheduled jobs and alerts that back up on-premises data in a virtual
machine disk in Azure. For more information, see SQL Server Backup and Restore
with Azure Blob Storage and Backup and Restore for SQL Server on Azure Virtual
Machines.
5. Depending on your application's needs, you can implement one of the following
three common scenarios:
a. You can keep your web server, application server, and non-sensitive data in a
database server in Azure, whereas you keep the sensitive data on-premises.
b. You can keep your web server and application server on-premises, whereas you
keep the database server in a virtual machine in Azure.
c. You can keep your database server, web server, and application server on-
premises whereas you keep the database replicas in virtual machines in Azure.
This setting allows the on-premises web servers or reporting applications to
access the database replicas in Azure. Therefore, you can lower the
workload on an on-premises database. We recommend that you implement this
scenario for heavy read workloads and developmental purposes. For
information on creating database replicas in Azure, see Always On Availability
Groups at High Availability and Disaster Recovery for SQL Server on Azure
Virtual Machines.
Set up a traditional web server (IIS - Internet Information Services) in Azure and
access databases in SQL Server on Azure Virtual Machines.
Implement and deploy a cloud service to Azure. Then, make sure that this cloud
service can access databases in SQL Server on Azure Virtual Machines. A cloud
service can include multiple web and worker roles.
The following table provides a comparison of traditional web development with Azure
Cloud Services and Azure Web Apps with respect to SQL Server on Azure Virtual
Machines. The table includes Azure Web Apps as it is possible to use SQL Server in an
Azure VM as a data source for Azure Web Apps via its public virtual IP address or DNS
name.
Development and deployment:
- Traditional web development in Azure Virtual Machines: Visual Studio, WebMatrix,
Visual Web Developer, WebDeploy, FTP, TFS, IIS Manager, PowerShell.
- Cloud services in Azure: Visual Studio, Azure SDK, TFS, PowerShell. Each cloud service
has two environments to which you can deploy your service package and configuration:
staging and production. You can deploy a cloud service to the staging environment to
test it before you promote it to production.
- Web hosting with Azure Web Apps: Visual Studio, WebMatrix, Visual Web Developer,
FTP, GIT, BitBucket, CodePlex, DropBox, GitHub, Mercurial, TFS, Web Deploy, PowerShell.

Administration and setup:
- Traditional web development in Azure Virtual Machines: You are responsible for
administrative tasks on the application, data, firewall rules, virtual network, and
operating system.
- Cloud services in Azure: You are responsible for administrative tasks on the
application, data, firewall rules, and virtual network.
- Web hosting with Azure Web Apps: You are responsible for administrative tasks on the
application and data only.

High availability and disaster recovery:
- Traditional web development in Azure Virtual Machines: See High Availability and
Disaster Recovery for SQL Server on Azure Virtual Machines.
- Cloud services in Azure: SQL Server database mirroring: use with Azure Cloud Services
(web/worker roles).
- Web hosting with Azure Web Apps: SQL Server Always On availability groups: you can
set up Always On Availability Groups when using Azure Web Apps with SQL Server VMs
in Azure, but you need to configure an Always On availability group listener to route the
communication to the primary replica.

Cross-premises connectivity:
- Traditional web development in Azure Virtual Machines: You can use Azure Virtual
Network to connect to on-premises.
- Cloud services in Azure: You can use Azure Virtual Network to connect to on-premises.
- Web hosting with Azure Web Apps: Azure Virtual Network is supported.
Next steps
For more information on running SQL Server on Azure Virtual Machines, see SQL Server
on Azure Virtual Machines Overview.
Collect baseline: Performance best
practices for SQL Server on Azure VM
Article • 12/16/2022
Applies to:
SQL Server on Azure VM
There is typically a trade-off between optimizing for costs and optimizing for
performance. This performance best practices series is focused on getting the best
performance for SQL Server on Azure Virtual Machines. If your workload is less
demanding, you might not require every recommended optimization. Consider your
performance needs, costs, and workload patterns as you evaluate these
recommendations.
Overview
For a prescriptive approach, gather performance counters using PerfMon/LogMan and
capture SQL Server wait statistics to better understand general pressures and potential
bottlenecks of the source environment.
Start by collecting the CPU, memory, IOPS, throughput, and latency of the source
workload at peak times following the application performance checklist.
Gather data during peak hours, such as workloads during your typical business day, but
also during other high-load processes such as end-of-day processing and weekend ETL
workloads. Consider scaling up your resources for atypically heavy workloads, such as
end-of-quarter processing, and then scale down once the workload completes.
Use the performance analysis to select the VM Size that can scale to your workload's
performance requirements.
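To make the analysis step concrete, here is a minimal sketch in Python (the sample values are illustrative; real data would come from your exported PerfMon/LogMan logs) that sizes against the peak and a high percentile rather than the average:

```python
# Summarize a collected counter series so sizing targets peak load,
# not the average. Sample values below are illustrative only.
def summarize(samples):
    ordered = sorted(samples)
    p95_index = int(0.95 * (len(ordered) - 1))
    return {
        "avg": sum(ordered) / len(ordered),
        "p95": ordered[p95_index],
        "peak": ordered[-1],
    }

# Hypothetical IOPS samples captured during peak hours.
iops_samples = [8200, 9400, 12100, 18750, 20300, 11900, 9800]
stats = summarize(iops_samples)
print(stats["peak"])  # 20300: size against this, not the ~12.9K average
```

The same summary applies to CPU, memory, throughput, and latency series; the point is that a VM sized for the average will fall short during exactly the windows that matter.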
Storage
SQL Server performance depends heavily on the I/O subsystem, and storage
performance is measured by IOPS and throughput. Unless your database fits into
physical memory, SQL Server constantly brings database pages in and out of the buffer
pool. The data and log files for SQL Server should be treated differently. Access to log
files is sequential, except when a transaction needs to be rolled back, whereas data files,
including tempdb , are accessed randomly. If you have a slow I/O subsystem, your users
may experience performance issues such as slow response times and tasks that do not
complete due to time-outs.
The Azure Marketplace virtual machine images place log files on a physical disk that is
separate from the data files by default. The tempdb data file count and size meet best
practices, and the files are targeted to the ephemeral D:\ drive.
The following PerfMon counters can help validate the IO throughput required by your
SQL Server:
Using IOPS and throughput requirements at peak levels, evaluate VM sizes that match
the capacity from your measurements.
If your workload requires 20K read IOPS and 10K write IOPS, you can either choose
E16s_v3 (with up to 32K cached and 25600 uncached IOPS) or M16_s (with up to 20K
cached and 10K uncached IOPS) with 2 P30 disks striped using Storage Spaces.
Make sure to understand both throughput and IOPS requirements of the workload as
VMs have different scale limits for IOPS and throughput.
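As a sketch of that sizing check (assuming the published P30 limit of 5,000 IOPS per disk and the VM cached-IOPS figure quoted above; verify current limits in the Azure documentation before sizing for real):

```python
# Check whether reads served from the VM cache and writes served from a
# striped disk pool cover the workload. The 5,000 IOPS per-disk figure is
# the published P30 limit; the VM cached-IOPS figure mirrors the example
# in the text.
def fits(vm_cached_iops, disk_iops_each, disk_count, read_iops, write_iops):
    striped_iops = disk_iops_each * disk_count  # Storage Spaces aggregates disks
    return read_iops <= vm_cached_iops and write_iops <= striped_iops

# The 20K-read / 10K-write example: 20K cached IOPS on the VM plus two
# striped P30 disks (2 x 5,000 = 10,000 uncached IOPS) just fits.
print(fits(vm_cached_iops=20000, disk_iops_each=5000, disk_count=2,
           read_iops=20000, write_iops=10000))  # True
```

Run the same check against throughput (MB/s) as well, since a configuration can satisfy IOPS while hitting the throughput cap.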
Memory
Track both the external memory used by the OS and the memory used internally by
SQL Server. Identifying pressure for either component will help size virtual machines and
identify opportunities for tuning.
The following PerfMon counters can help validate the memory health of a SQL Server
virtual machine:
\Memory\Available MBytes
\SQLServer:Memory Manager\Target Server Memory (KB)
\SQLServer:Memory Manager\Total Server Memory (KB)
\SQLServer:Buffer Manager\Lazy writes/sec
\SQLServer:Buffer Manager\Page life expectancy
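As an illustration, those counters could feed a simple health check like the following sketch (the thresholds are common rules of thumb, not official limits):

```python
# Flag likely memory pressure from the PerfMon counters listed above.
# The thresholds are rule-of-thumb values, not official limits.
def memory_pressure(available_mb, target_kb, total_kb, lazy_writes_per_sec):
    reasons = []
    if available_mb < 500:                # the OS itself is short on memory
        reasons.append("low Available MBytes")
    if total_kb < target_kb:              # SQL Server wants more than it holds
        reasons.append("Total Server Memory below Target")
    if lazy_writes_per_sec > 20:          # buffer pool flushing pages under pressure
        reasons.append("sustained lazy writes")
    return reasons

# Illustrative readings: a VM that would benefit from more memory.
print(memory_pressure(available_mb=350, target_kb=16_000_000,
                      total_kb=12_000_000, lazy_writes_per_sec=35))
```

Page life expectancy is best judged as a trend rather than against a fixed number, so it is deliberately left out of this sketch.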
Compute
Compute in Azure is managed differently than on-premises. On-premises servers are
built to last several years without an upgrade due to the management overhead and
cost of acquiring new hardware. Virtualization mitigates some of these issues but
applications are optimized to take the most advantage of the underlying hardware,
meaning any significant change to resource consumption requires rebalancing the
entire physical environment.
This is not a challenge in Azure, where moving to a new virtual machine on a different
series of hardware, and even in a different region, is easy to achieve.
In Azure, you want to take advantage of as much of the virtual machine's resources as
possible; therefore, Azure virtual machines should be configured to keep the average
CPU as high as possible without impacting the workload.
The following PerfMon counters can help validate the compute health of a SQL Server
virtual machine:
Note
Ideally, aim to use 80% of your compute, with peaks above 90% but not
reaching 100% for any sustained period of time. Fundamentally, you only want to
provision the compute the application needs and then plan to scale up or down as
the business requires.
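The guidance in this note can be sketched as a simple check over collected CPU samples (the sample values and the three-sample "sustained" window are illustrative choices, not official thresholds):

```python
# Apply the guidance above: average utilization near (but not above) 80%,
# brief peaks over 90% allowed, and no sustained run at 100%.
def cpu_ok(samples, sustained=3):
    avg = sum(samples) / len(samples)
    run = longest = 0
    for s in samples:
        run = run + 1 if s >= 100 else 0
        longest = max(longest, run)
    return avg <= 80 and longest < sustained

print(cpu_ok([70, 75, 85, 92, 78, 74]))     # True: healthy utilization
print(cpu_ok([90, 100, 100, 100, 95, 92]))  # False: saturated for 3 samples
```

A result of False in either direction is a signal to scale up; consistently low averages are a signal that a smaller, cheaper size would do.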
Next steps
To learn more, see the other articles in this best practices series:
Quick checklist
VM size
Storage
Security
HADR settings
For security best practices, see Security considerations for SQL Server on Azure Virtual
Machines.
Review other SQL Server Virtual Machine articles at SQL Server on Azure Virtual
Machines Overview. If you have questions about SQL Server virtual machines, see the
Frequently Asked Questions.
Run SQL Server VM on an Azure
Dedicated Host
Article • 07/10/2023
This article details the specifics of using a SQL Server virtual machine (VM) with Azure
Dedicated Host. Additional information about Azure Dedicated Host can be found in the
blog post Introducing Azure Dedicated Host.
Overview
Azure Dedicated Host is a service that provides physical servers - able to host one or
more virtual machines - dedicated to one Azure subscription. Dedicated hosts are the
same physical servers used in Microsoft's data centers, provided as a resource. You can
provision dedicated hosts within a region, availability zone, and fault domain. Then, you
can place VMs directly into your provisioned hosts, in whatever configuration best
meets your needs.
Limitations
Not all VM series are supported on dedicated hosts, and VM series availability
varies by region. For more information, see Overview of Azure Dedicated Hosts.
Licensing
You can choose between two different licensing options when you place your SQL
Server VM in an Azure Dedicated Host.
SQL VM licensing: This is the existing licensing option, where you pay for each SQL
Server VM license individually.
Dedicated host licensing: The new licensing model available for the Azure
Dedicated Host, where SQL Server licenses are bundled and paid for at the host
level.
Provisioning
Provisioning a SQL Server VM to the dedicated host is no different than any other Azure
virtual machine. You can do so using Azure PowerShell, the Azure portal, and the Azure
CLI.
The process of adding an existing SQL Server VM to the dedicated host requires
downtime, but it will not affect data or cause data loss. Nonetheless, all
databases, including system databases, should be backed up prior to the move.
Virtualization
One of the benefits of a dedicated host is unlimited virtualization. For example, you can
have licenses for 64 vCores, but you can configure the host to have 128 vCores, so you
get double the vCores but pay only half of what you would for the SQL Server licenses.
Because it's your host, you are eligible to set the virtualization with a 1:2 ratio.
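The arithmetic behind that example, as a quick sketch (the license cost figure is hypothetical):

```python
# The licensing arithmetic from the paragraph above: license 64 vCores,
# configure the host with 128 vCores (a 1:2 virtualization ratio), and the
# effective license cost per running vCore halves. Cost value is hypothetical.
licensed_vcores = 64
configured_vcores = 128
ratio = configured_vcores / licensed_vcores     # 2.0 -> a 1:2 ratio
license_cost = 10000.0                          # hypothetical total cost
cost_per_vcore = license_cost / configured_vcores
cost_without = license_cost / licensed_vcores
print(ratio, cost_per_vcore / cost_without)     # 2.0 0.5
```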
FAQ
Q: How does the Azure Hybrid Benefit work for Windows Server/SQL Server licenses
on Azure Dedicated Host?
A: Customers can use the value of their existing Windows Server and SQL Server licenses
with Software Assurance, or qualifying subscription licenses, to pay a reduced rate on
Azure Dedicated Host using Azure Hybrid Benefit. Windows Server Datacenter and SQL
Server Enterprise Edition customers get unlimited virtualization (deploy as many
Windows Server virtual machines as possible on the host subject to the physical capacity
of the underlying server) when they license the entire host and use Azure Hybrid Benefit.
All Windows Server and SQL Server workloads in Azure Dedicated Host are also eligible
for Extended Security Updates for Windows Server and SQL Server 2012 at no additional
charge.
Next steps
For more information, see the following articles:
SQL Server 2012 has reached the end of its support (EOS) life cycle. Because many
customers are still using this version, we're providing several options to continue getting
support. You can migrate your on-premises SQL Server instances to Azure virtual
machines (VMs), migrate to Azure SQL Database, or stay on-premises and purchase
extended security updates.
Unlike with a managed instance, migrating to an Azure VM does not require recertifying
your applications. And unlike with staying on-premises, you'll receive free extended
security patches by migrating to an Azure VM.
The rest of this article provides considerations for migrating your SQL Server instance to
an Azure VM.
For more information about end of support options, see End of support.
Provisioning
There is a pay-as-you-go SQL Server 2012 on Windows Server 2012 R2 image available
on Azure Marketplace.
Note
SQL Server 2008 and SQL Server 2008 R2 are out of extended support and no
longer available from the Azure Marketplace.
Customers who are on an earlier version of SQL Server will need to either self-install or
upgrade to SQL Server 2012. Likewise, customers on an earlier version of Windows
Server will need to either deploy their VM from a custom VHD or upgrade to Windows
Server 2012 R2.
Images deployed through Azure Marketplace come with the SQL IaaS Agent extension
pre-installed. The SQL IaaS Agent extension is a requirement for flexible licensing and
automated patching. Customers who deploy self-installed VMs will need to manually
install the SQL IaaS Agent extension.
Note
Although the SQL Server Create and Manage options will work with the SQL Server
2012 image in the Azure portal, the following features are not supported: Automatic
backups, Azure Key Vault integration, and R Services.
Licensing
Pay-as-you-go SQL Server 2012 deployments can convert to Azure Hybrid Benefit.
Self-installed SQL Server 2012 instances on an Azure VM can register with the SQL IaaS
Agent extension and convert their license type to pay-as-you-go.
Migration
You can migrate EOS SQL Server instances to an Azure VM with manual backup/restore
methods. This is the most common migration method from on-premises to an Azure
VM.
SQL Server backups: Use Azure Backup to help protect your EOS SQL Server 2012
against ransomware, accidental deletion, and corruption with a 15-minute RPO and
point-in-time recovery. For more details, see this article.
Log shipping: You can create a log shipping replica in another zone or Azure
region with continuous restores to reduce the RTO. You need to manually
configure log shipping.
Azure Site Recovery: You can replicate your VM between zones and regions
through Azure Site Recovery replication. SQL Server requires app-consistent
snapshots to guarantee recovery in case of a disaster. Azure Site Recovery offers a
minimum 1-hour RPO and a 2-hour (plus SQL Server recovery time) RTO for EOS
SQL Server disaster recovery.
Security patching
Extended security updates for SQL Server VMs are delivered through the Microsoft
Windows Update channels after the SQL Server VM has been registered with the SQL
IaaS Agent extension. Patches can be downloaded manually or automatically.
Note
Registration with the SQL IaaS Agent extension is not required for manual
installation of extended security updates on Azure virtual machines. Microsoft
Update will automatically detect that the VM is running in Azure and present the
relevant updates for download even if the extension is not present.
Azure Update management as of today does not detect patches for SQL Server
Marketplace images. You should look under Windows Updates to apply SQL Server
updates in this case.
Next steps
Migration guide: SQL Server to SQL Server on Azure Virtual Machines
Create a SQL Server VM in the Azure portal
FAQ for SQL Server on Azure Virtual Machines
Find out more about end of support options and Extended Security Updates.
Connect to a SQL Server virtual machine
on Azure
Article • 06/28/2023
Applies to:
SQL Server on Azure VM
Overview
This article describes how to connect to your SQL Server on Azure virtual machine (VM). It
covers some general connectivity scenarios. If you need to troubleshoot or configure
connectivity outside of the portal, see the manual configuration at the end of this topic.
If you would rather have a full walkthrough of both provisioning and connectivity, see
Provision a SQL Server virtual machine on Azure.
Connection scenarios
The way a client connects to a SQL Server VM differs depending on the location of the
client and the networking configuration.
If you provision a SQL Server VM in the Azure portal, you have the option of specifying
the type of SQL connectivity.
The following sections explain the Public and Private options in more detail.
Important
The virtual machine images for the SQL Server Developer and Express editions do
not automatically enable the TCP/IP protocol. For Developer and Express editions,
you must use SQL Server Configuration Manager to manually enable the TCP/IP
protocol after creating the VM.
Any client with internet access can connect to the SQL Server instance by specifying
either the public IP address of the virtual machine or any DNS label assigned to that IP
address. If the SQL Server port is 1433, you do not need to specify it in the connection
string. The following connection string connects to a SQL VM with a DNS label of
sqlvmlabel.eastus.cloudapp.azure.com using SQL authentication (you could also use the
public IP address).
text
Server=sqlvmlabel.eastus.cloudapp.azure.com;Integrated Security=false;User
ID=<login_name>;Password=<your_password>
Although this string enables connectivity for clients over the internet, this does not
imply that anyone can connect to your SQL Server instance. Outside clients have to use
the correct username and password. However, for additional security, you can avoid the
well-known port 1433. For example, if you were to configure SQL Server to listen on port
1500 and establish proper firewall and network security group rules, you could connect
by appending the port number to the server name. The following example alters the
previous one by adding a custom port number, 1500, to the server name:
text
Server=sqlvmlabel.eastus.cloudapp.azure.com,1500;Integrated
Security=false;User ID=<login_name>;Password=<your_password>
Note
When you query SQL Server on VM over the internet, all outgoing data from the
Azure datacenter is subject to normal pricing on outbound data transfers.
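The connection strings above follow a fixed shape; a minimal sketch of assembling them programmatically (the helper name is illustrative, and the placeholders match the examples in this article):

```python
# Build a SQL-authentication connection string like the examples above.
# The port is optional; when given, it is appended after a comma.
def sql_auth_conn(server, user, password, port=None):
    host = f"{server},{port}" if port else server
    return (f"Server={host};Integrated Security=false;"
            f"User ID={user};Password={password}")

# Sample DNS label from the text, with the custom port 1500.
print(sql_auth_conn("sqlvmlabel.eastus.cloudapp.azure.com",
                    "<login_name>", "<your_password>", port=1500))
```

The resulting string can be passed to any client library that accepts SQL Server connection strings.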
Important
The virtual machine images for the SQL Server Developer and Express editions do
not automatically enable the TCP/IP protocol. For Developer and Express editions,
you must use SQL Server Configuration Manager to manually enable the TCP/IP
protocol after creating the VM.
Private connectivity is often used in conjunction with a virtual network, which enables
several scenarios. You can connect VMs in the same virtual network, even if those VMs
exist in different resource groups. And with a site-to-site VPN, you can create a hybrid
architecture that connects VMs with on-premises networks and machines.
Virtual networks also enable you to join your Azure VMs to a domain. This is the only
way to use Windows authentication to SQL Server. The other connection scenarios
require SQL authentication with user names and passwords.
Assuming that you have configured DNS in your virtual network, you can connect to
your SQL Server instance by specifying the SQL Server VM computer name in the
connection string. The following example also assumes that Windows authentication has
been configured and that the user has been granted access to the SQL Server instance.
text
Server=mysqlvm;Integrated Security=true
First, connect to the SQL Server virtual machine with remote desktop.
1. After the Azure virtual machine is created and running, select Virtual machine, and
then choose your new VM.
2. Select Connect and then choose RDP from the drop-down to download your RDP
file.
3. Open the RDP file that your browser downloads for the VM.
4. The Remote Desktop Connection notifies you that the publisher of this remote
connection cannot be identified. Click Connect to continue.
5. In the Windows Security dialog, click Use a different account. You might have to
click More choices to see this. Specify the user name and password that you
configured when you created the VM. You must add a backslash before the user
name.
6. Click OK to connect.
Next, enable the TCP/IP protocol with SQL Server Configuration Manager.
1. While connected to the virtual machine with remote desktop, search for
Configuration Manager:
2. In SQL Server Configuration Manager, in the console pane, expand SQL Server
Network Configuration.
3. In the console pane, click Protocols for MSSQLSERVER (the default instance
name). In the details pane, right-click TCP and click Enable if it is not already
enabled.
4. In the console pane, click SQL Server Services. In the details pane, right-click SQL
Server (instance name) (the default instance is SQL Server (MSSQLSERVER)), and
then click Restart, to stop and restart the instance of SQL Server.
5. Close SQL Server Configuration Manager.
For more information about enabling protocols for the SQL Server Database Engine, see
Enable or Disable a Server Network Protocol.
Note
DNS Labels are not required if you plan to only connect to the SQL Server instance
within the same Virtual Network or only locally.
To create a DNS label:
1. Select Virtual machines in the portal.
2. Select your SQL Server VM to bring up its properties.
3. Enter a DNS Label name. This name is an A record that can be used to connect to
your SQL Server VM by name instead of by IP address directly.
2. In the Connect to Server or Connect to Database Engine dialog box, edit the
Server name value. Enter the IP address or full DNS name of the virtual machine
(determined in the previous task). You can also add a comma and provide SQL
Server's TCP port. For example, tutorial-sqlvm1.westus2.cloudapp.azure.com,1433 .
3. In the Authentication box, select SQL Server Authentication.
6. Select Connect.
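The Server name value can carry an optional TCP port after a comma, as in the example above. As a sketch (a hypothetical Python helper, not part of any Microsoft tooling), the host,port format can be split like this:

```python
# Hypothetical helper, for illustration only: splits the "host,port"
# server-name format accepted by SQL Server client tools such as SSMS.
def split_server_name(server_name, default_port=1433):
    """Return (host, port); the port defaults to 1433 when no comma is given."""
    host, sep, port = server_name.partition(",")
    return host, int(port) if sep else default_port

print(split_server_name("tutorial-sqlvm1.westus2.cloudapp.azure.com,1433"))
print(split_server_name("mysqlvm"))  # no port given, so the default 1433 applies
```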
To connect to SQL Server on Azure VM, the following requirements must be met:

Enable SQL Server authentication mode: SQL Server authentication is needed to
connect to the VM remotely unless you have configured Active Directory on a
virtual network.

Create a SQL login: If you are using SQL authentication, you need a SQL login
with a user name and password that also has permissions to your target database.

Enable firewall rule for the SQL Server port: The firewall on the VM must allow
inbound traffic on the SQL Server port (default 1433).

Create a network security group rule for TCP 1433: You must allow the VM to
receive traffic on the SQL Server port (default 1433) if you want to connect over
the internet. Local and virtual-network-only connections do not require this. This
is the only step required in the Azure portal.
Tip
The steps listed above are done for you when you configure connectivity in the
portal. Use them only to confirm your configuration or to set up connectivity
manually for SQL Server.
Next steps
To see provisioning instructions along with these connectivity steps, see Provisioning a
SQL Server virtual machine on Azure.
For other topics related to running SQL Server on Azure VMs, see SQL Server on Azure
virtual machines.
Provision SQL Server on Azure VM
(Azure portal)
Article • 03/27/2023
Applies to:
SQL Server on Azure VM
This article provides a detailed description of the available configuration options when
deploying your SQL Server on Azure Virtual Machines (VMs) by using the Azure portal.
For a quick guide, see the SQL Server VM quickstart instead.
Prerequisites
An Azure subscription. Create a free account to get started.
The Developer edition is used in this article because it is a full-featured, free edition of
SQL Server for development testing. You pay only for the cost of running the VM.
However, you are free to choose any of the images to use in this walkthrough. For a
description of available images, see the SQL Server Windows Virtual Machines overview.
Licensing costs for SQL Server are incorporated into the per-second pricing of the VM
you create and vary by edition and number of cores. However, SQL Server Developer edition is
free for development and testing, not production. Also, SQL Express is free for
lightweight workloads (less than 1 GB of memory, less than 10 GB of storage). You can
also bring your own license (BYOL) and pay only for the VM. Those image names are
prefixed with {BYOL}. For more information on these options, see Pricing guidance for
SQL Server Azure VMs.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in
the list, select All services, then type Azure SQL in the search box. You can select
the star next to Azure SQL to save it as a favorite to pin it to the left-hand
navigation.
2. Select + Create to open the Select SQL deployment option page. Select the
Image drop-down and then type 2019 in the SQL Server image search box. Choose
a SQL Server image, such as Free SQL Server License: SQL 2019 on Windows
Server 2019 from the drop-down. Choose Show details for additional information
about the image.
3. Select Create.
Basic settings
The Basics tab allows you to select the subscription, resource group, and instance
details.
Using a new resource group is helpful if you are just testing or learning about SQL
Server deployments in Azure. After you finish with your test, delete the resource group
to automatically delete the VM and all resources associated with that resource group.
For more information about resource groups, see Azure Resource Manager Overview.
The estimated monthly cost displayed on the Choose a size window does not
include SQL Server licensing costs. This estimate is the cost of the VM alone. For the
Express and Developer editions of SQL Server, this estimate is the total estimated
cost. For other editions, see the Windows Virtual Machines pricing page and
select your target edition of SQL Server. Also see the Pricing guidance for SQL
Server Azure VMs and Sizes for virtual machines.
Under Inbound port rules, choose Allow selected ports and then select RDP
(3389) from the drop-down.
You also have the option to enable the Azure Hybrid Benefit to use your own SQL Server
license and save on licensing cost.
Disks
On the Disks tab, configure your disk options.
Under OS disk type, select the type of disk you want for your OS from the drop-
down. Premium is recommended for production systems but is not available for a
Basic VM. To use a Premium SSD, change the virtual machine size.
Under Advanced, select Yes under use Managed Disks.
Microsoft recommends Managed Disks for SQL Server. Managed Disks handles storage
behind the scenes. In addition, when virtual machines with Managed Disks are in the
same availability set, Azure distributes the storage resources to provide appropriate
redundancy. For more information, see Azure Managed Disks Overview. For specifics
about managed disks in an availability set, see Use managed disks for VMs in availability
set.
Networking
On the Networking tab, configure your networking options.
Create a new virtual network or use an existing virtual network for your SQL Server
VM. Designate a Subnet as well.
Under NIC network security group, select either a basic security group or the
advanced security group. Choosing the basic option allows you to select inbound
ports for the SQL Server VM which are the same values configured on the Basic
tab. Selecting the advanced option allows you to choose an existing network
security group, or create a new one.
You can make other changes to network settings, or keep the default values.
Management
On the Management tab, configure monitoring and auto-shutdown.
Azure enables Boot diagnostics by default with the same storage account
designated for the VM. On this tab, you can change these settings and enable OS
guest diagnostics.
You can also enable System assigned managed identity and auto-shutdown on
this tab.
Connectivity
Authentication
Azure Key Vault integration
Storage configuration
SQL instance settings
Automated patching
Automated backup
Machine Learning Services
Connectivity
Under SQL connectivity, specify the type of access you want to the SQL Server instance
on this VM. For the purposes of this walkthrough, select Public (internet) to allow
connections to SQL Server from machines or services on the internet. With this option
selected, Azure automatically configures the firewall and the network security group to
allow traffic on the port selected.
Tip
By default, SQL Server listens on a well-known port, 1433. For increased security,
change the port in the previous dialog to listen on a non-default port, such as
1401. If you change the port, you must connect using that port from any client
tools, such as SQL Server Management Studio (SSMS).
To connect to SQL Server via the internet, you also must enable SQL Server
Authentication, which is described in the next section.
If you would prefer to not enable connections to the Database Engine via the internet,
choose one of the following options:
Local (inside VM only) to allow connections to SQL Server only from within the
VM.
Private (within Virtual Network) to allow connections to SQL Server from
machines or services in the same virtual network.
In general, improve security by choosing the most restrictive connectivity that your
scenario allows. But all the options are securable through network security group (NSG)
rules and SQL/Windows Authentication. You can edit the NSG after the VM is created.
For more information, see Security Considerations for SQL Server in Azure Virtual
Machines.
Authentication
If you require SQL Server Authentication, select Enable under SQL Authentication on
the SQL Server settings tab.
Note
If you plan to access SQL Server over the internet (the Public connectivity option),
you must enable SQL Authentication here. Public access to the SQL Server requires
SQL Authentication.
If you enable SQL Server Authentication, specify a Login name and Password. This login
name is configured as a SQL Server Authentication login and a member of the sysadmin
fixed server role. For more information about Authentication Modes, see Choose an
Authentication Mode.
If you prefer not to enable SQL Server Authentication, you can use the local
Administrator account on the VM to connect to the SQL Server instance.
The following table lists the parameters required to configure Azure Key Vault (AKV)
Integration.
PARAMETER DESCRIPTION EXAMPLE
For more information, see Configure Azure Key Vault Integration for SQL Server on
Azure VMs.
Storage configuration
On the SQL Server settings tab, under Storage configuration, select Change
configuration to open the Configure storage page and specify storage requirements.
You can choose to leave the values at default, or you can manually change the storage
topology to suit your IOPS needs. For more information, see storage configuration.
Under Data storage, choose the location for your data drive, the disk type, and the
number of disks. You can also select the checkbox to store your system databases on
your data drive instead of the local C:\ drive.
Under Log storage, you can choose to use the same drive as the data drive for your
transaction log files, or you can choose to use a separate drive from the drop-down. You
can also choose the name of the drive, the disk type, and the number of disks.
Configure your tempdb database settings under Tempdb storage, such as the location of
the database files, as well as the number of files, initial size, and autogrowth size in MB.
Currently, during deployment, the maximum number of tempdb files is 8, but more
files can be added after the SQL Server VM is deployed.
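As a sketch of the deployment-time cap described above (a hypothetical Python helper, not an Azure API), you can model how many tempdb files are created at deployment versus how many must be added afterward:

```python
# Hypothetical sketch, not an Azure API: the portal caps the number of
# tempdb data files at deployment time; extra files must be added later.
DEPLOYMENT_MAX_TEMPDB_FILES = 8  # current portal limit described above

def tempdb_files_at_deployment(requested):
    """Return (files created at deployment, files to add after the VM exists)."""
    created = min(requested, DEPLOYMENT_MAX_TEMPDB_FILES)
    return created, requested - created

print(tempdb_files_at_deployment(12))  # (8, 4): 8 created now, 4 added later
print(tempdb_files_at_deployment(4))   # (4, 0): under the cap, nothing deferred
```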
If you chose a free license image, such as the developer edition, the SQL Server license
option is grayed out.
Automated patching
Automated patching is enabled by default. Automated patching allows Azure to
automatically apply SQL Server and operating system security updates. Specify a day of
the week, time, and duration for a maintenance window. Azure performs patching in this
maintenance window. The maintenance window schedule uses the VM locale. If you do
not want Azure to automatically patch SQL Server and the operating system, select
Disable.
For more information, see Automated Patching for SQL Server in Azure Virtual Machines.
Automated backup
Enable automatic database backups for all databases under Automated backup.
Automated backup is disabled by default.
When you enable SQL automated backup, you can configure the following settings:
To encrypt the backup, select Enable. Then specify the Password. Azure creates a
certificate to encrypt the backups and uses the specified password to protect that
certificate.
Choose Select Storage Container to specify the container where you want to store your
backups.
By default the schedule is set automatically, but you can create your own schedule by
selecting Manual, which allows you to configure the backup frequency, backup time
window, and the log backup frequency in minutes.
For more information, see Automated Backup for SQL Server in Azure Virtual Machines.
Review + create
On the Review + create tab, review the summary, and then select Create to start the deployment.
You can monitor the deployment from the Azure portal. The Notifications button at the
top of the screen shows basic status of the deployment.
Note
An example of time for Azure to deploy a SQL Server VM: A test SQL Server VM
provisioned to the East US region with default settings takes approximately 12
minutes to complete. You might experience faster or slower deployment times
based on your region and selected settings.
Open the VM with Remote Desktop
Use the following steps to connect to the SQL Server virtual machine with Remote
Desktop Protocol (RDP):
1. After the Azure virtual machine is created and running, select Virtual machine, and
then choose your new VM.
2. Select Connect and then choose RDP from the drop-down to download your RDP
file.
3. Open the RDP file that your browser downloads for the VM.
4. The Remote Desktop Connection notifies you that the publisher of this remote
connection cannot be identified. Click Connect to continue.
5. In the Windows Security dialog, click Use a different account. You might have to
click More choices to see this. Specify the user name and password that you
configured when you created the VM. You must add a backslash before the user
name.
6. Click OK to connect.
After you connect to the SQL Server virtual machine, you can launch SQL Server
Management Studio and connect with Windows Authentication using your local
administrator credentials. If you enabled SQL Server Authentication, you can also
connect with SQL Authentication using the SQL login and password you configured
during provisioning.
Access to the machine enables you to directly change machine and SQL Server settings
based on your requirements. For example, you could configure the firewall settings or
change SQL Server configuration settings.
Note
If you did not select Public during provisioning, then you can change your SQL
connectivity settings through the portal after provisioning. For more information,
see Change your SQL connectivity settings.
The following sections show how to connect over the internet to your SQL Server VM
instance.
Note
DNS Labels are not required if you plan to only connect to the SQL Server instance
within the same Virtual Network or only locally.
To create a DNS Label, first select Virtual machines in the portal. Select your SQL Server
VM to bring up its properties.
3. Enter a DNS Label name. This name is an A Record that can be used to connect to
your SQL Server VM by name instead of by IP Address directly.
2. In the Connect to Server or Connect to Database Engine dialog box, edit the
Server name value. Enter the IP address or full DNS name of the virtual machine
(determined in the previous task). You can also add a comma and provide SQL
Server's TCP port. For example, tutorial-sqlvm1.westus2.cloudapp.azure.com,1433 .
6. Select Connect.
Note
This example uses the common port 1433. However, this value will need to be
modified if a different port (such as 1401) was specified during the deployment of
the SQL Server VM.
Known Issues
Next steps
For other information about using SQL Server in Azure, see SQL Server on Azure Virtual
Machines and the Frequently Asked Questions.
How to use Azure PowerShell to
provision SQL Server on Azure Virtual
Machines
Article • 03/15/2023
Applies to:
SQL Server on Azure VM
This guide covers options for using PowerShell to provision SQL Server on Azure Virtual
Machines (VMs). For a streamlined Azure PowerShell example that relies on default
values, see the SQL VM Azure PowerShell quickstart.
If you don't have an Azure subscription, create a free account before you begin.
Note
This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.
PowerShell
Connect-AzAccount
2. When prompted, enter your credentials. Use the same email and password that
you use to sign in to the Azure portal.
Modify as you want and then run these cmdlets to initialize these variables.
PowerShell
$Location = "SouthCentralUS"
$ResourceGroupName = "sqlvm2"
Storage properties
Define the storage account and the type of storage to be used by the virtual machine.
Modify as you want, and then run the following cmdlet to initialize these variables. We
recommend using premium SSDs for production workloads.
PowerShell
$StorageSku = "Premium_LRS"
Network properties
Define the properties to be used by the network in the virtual machine.
Network interface
TCP/IP allocation method
Virtual network name
Virtual subnet name
Range of IP addresses for the virtual network
Range of IP addresses for the subnet
Public domain name label
Modify as you want and then run this cmdlet to initialize these variables.
PowerShell
$TCPIPAllocationMethod = "Dynamic"
$SubnetName = "Default"
$VNetAddressPrefix = "10.0.0.0/16"
$VNetSubnetAddressPrefix = "10.0.0.0/24"
$DomainName = $ResourceGroupName
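The subnet prefix chosen here (10.0.0.0/24) must fall within the virtual network's address space (10.0.0.0/16). A quick sketch using Python's standard ipaddress module, purely to illustrate the CIDR relationship, confirms this:

```python
# Illustration only, using Python's standard ipaddress module: the subnet
# prefix must be contained within the virtual network's address space.
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")    # $VNetAddressPrefix
subnet = ipaddress.ip_network("10.0.0.0/24")  # $VNetSubnetAddressPrefix

print(subnet.subnet_of(vnet))  # True: the /24 fits inside the /16
print(vnet.num_addresses)      # 65536 addresses in the /16
print(subnet.num_addresses)    # 256 addresses in the /24
```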
Modify as you want and then run this cmdlet to initialize these variables.
PowerShell
$VMSize = "Standard_DS13"
1. First, list all of the SQL Server image offerings with the Get-AzVMImageOffer
command. This command lists the current images that are available in the Azure
portal and also older images that can only be installed with PowerShell:
PowerShell
2. For this tutorial, use the following variables to specify SQL Server 2017 on
Windows Server 2016.
PowerShell
$OfferName = "SQL2017-WS2016"
$PublisherName = "MicrosoftSQLServer"
$Version = "latest"
PowerShell
4. For this tutorial, use the SQL Server 2017 Developer edition (SQLDEV). The
Developer edition is freely licensed for testing and development, and you only pay
for the cost of running the VM.
PowerShell
$Sku = "SQLDEV"
PowerShell
PowerShell
$StorageAccount = New-AzStorageAccount -ResourceGroupName $ResourceGroupName
`
Tip
Note
You can define additional properties of the virtual network subnet configuration
using this cmdlet, but that is beyond the scope of this tutorial.
PowerShell
PowerShell
Note
You can define additional properties of the public IP address using this cmdlet, but
that is beyond the scope of this initial tutorial. You could also create a private
address or an address with a static address, but that is also beyond the scope of
this tutorial.
PowerShell
1. First, create a network security group rule for remote desktop (RDP) to allow RDP
connections.
PowerShell
2. Configure a network security group rule that allows traffic on TCP port 1433. Doing
so enables connections to SQL Server over the internet.
PowerShell
PowerShell
-SecurityRules $NsgRuleRDP,$NsgRuleSQL
PowerShell
-NetworkSecurityGroupId $Nsg.Id
Configure a VM object
Now that storage and network resources are defined, you're ready to define compute
resources for the virtual machine.
Specify the virtual machine size and various operating system properties.
Specify the network interface that you previously created.
Define blob storage.
Specify the operating system disk.
PowerShell
Run the following cmdlet. You'll need to type the VM's local administrator name and
password into the PowerShell credential request window.
PowerShell
Run this cmdlet to set the operating system properties for your virtual machine.
PowerShell
-ProvisionVMAgent -EnableAutoUpdate
Run this cmdlet to set the network interface for your virtual machine.
PowerShell
PowerShell
Specify that the operating system for the virtual machine will come from an image.
Set caching to read only (because SQL Server is being installed on the same disk).
Specify the variables that you previously initialized for the VM name and the
operating system disk.
Run this cmdlet to set the operating system disk properties for your virtual machine.
PowerShell
Run this cmdlet to specify the platform image for your virtual machine.
PowerShell
Tip
PowerShell
New-AzVM -ResourceGroupName $ResourceGroupName -Location $Location -VM
$VirtualMachine
Note
If you get an error about boot diagnostics, you can ignore it. A standard storage
account is created for boot diagnostics because the specified storage account for
the virtual machine's disk is a premium storage account.
PowerShell
Stop or remove a VM
If you don't need the VM to run continuously, you can avoid unnecessary charges by
stopping it when not in use. The following command stops the VM but leaves it
available for future use.
PowerShell
You can also permanently delete all resources associated with the virtual machine with
the Remove-AzResourceGroup command. Doing so permanently deletes the virtual
machine as well, so use this command with care.
Example script
The following script contains the complete PowerShell script for this tutorial. It assumes
that you have already set up the Azure subscription to use with the Connect-AzAccount
and Select-AzSubscription commands.
PowerShell
# Variables
## Global
$Location = "SouthCentralUS"
$ResourceGroupName = "sqlvm2"
## Storage
$StorageSku = "Premium_LRS"
## Network
$SubnetName = "Default"
$VNetAddressPrefix = "10.0.0.0/16"
$VNetSubnetAddressPrefix = "10.0.0.0/24"
$TCPIPAllocationMethod = "Dynamic"
$DomainName = $ResourceGroupName
##Compute
$VMSize = "Standard_DS13"
##Image
$PublisherName = "MicrosoftSQLServer"
$OfferName = "SQL2017-WS2016"
$Sku = "SQLDEV"
$Version = "latest"
# Resource Group
# Storage
# Network
# Compute
# Image
# Add the SQL IaaS Agent Extension, and choose the license type
Next steps
After the virtual machine is created, you can:
Connect to the virtual machine using RDP
Configure SQL Server settings in the portal for your VM, including:
Storage settings
Automated management tasks
Configure connectivity
Connect clients and applications to the new SQL Server instance
Deploy SQL Server to an Azure
confidential VM
Article • 03/30/2023
Applies to:
SQL Server on Azure VM
In this article, learn how to deploy SQL Server to an Azure confidential VM.
Overview
Azure confidential VMs provide a strong, hardware-enforced boundary that hardens the
protection of the guest OS against host operator access. Choosing a confidential VM
size for your SQL Server on Azure VM provides an extra layer of protection, enabling you
to confidently store your sensitive data in the cloud and meet strict compliance
requirements.
Azure confidential VMs leverage AMD processors with SEV-SNP technology that encrypt
the memory of the VM using keys generated by the processor. This helps protect data
while it's in use (the data that is processed inside the memory of the SQL Server process)
from unauthorized access from the host OS. The OS disk of a confidential VM can also
be encrypted with keys bound to the Trusted Platform Module (TPM) chip of the virtual
machine, reinforcing protection for data-at-rest.
Azure confidential VMs are available in both the general purpose and memory
optimized VM size series.
Recommendations for disk encryption are different for confidential VMs than for the
other VM sizes. See disk encryption to learn more.
To deploy a SQL Server VM to a confidential Azure VM, select the following values when
deploying a SQL Server VM:
1. Choose a supported region. To validate region supportability, look for the ECadsv5-
series or DCadsv5-series in VM products Available by Azure region .
2. Set the Security type to Confidential virtual machines. If this option is grayed out,
it's likely the chosen region doesn't currently support confidential VMs. Choose a
different region from the drop-down.
3. Choose a supported confidential SQL Server image. To change the SQL Server
image, select See all images and then filter by Security type = Confidential VMs
to identify all SQL Server images that support confidential VMs.
4. Choose a supported VM size. To see all available sizes, select See all sizes to
identify all the VM sizes that support confidential VMs, as well as the sizes that
don't.
5. (Optional) Configure confidential disk encryption. Follow the steps in the Disk
section of the Quickstart.
Limitations
Currently, only the following pre-built SQL Server images support Azure
confidential VMs. If you wish to use a different combination of SQL Server
version/edition/operating system with Confidential VMs, you can deploy an image
of your choice and then self-install SQL Server.
SQL Server 2022 Enterprise / Developer / Standard / Web on Windows Server
2022 - x64 Gen 2
SQL Server 2019 Enterprise on Windows Server 2022 Database Engine Only -
x64 Gen 2 .
SQL Server 2017 Enterprise on Windows Server 2019 Database Engine Only -
x64 Gen 2
Confidential VMs aren't currently available in all regions. To validate region
supportability, look for the ECadsv5-series or DCadsv5-series in VM products
Available by Azure region .
Next steps
In this article, you learned to deploy SQL Server to a confidential virtual machine in the
Azure portal. To learn more about how to migrate your data to the new SQL Server, see
the following article.
Applies to:
SQL Server on Azure VM
The SQL virtual machines resource is a management point that is separate from the
Virtual machine resource, which is used to manage the VM itself, such as starting,
stopping, or restarting it.
Prerequisite
The SQL virtual machines resource is only available to SQL Server VMs that have been
registered with the SQL IaaS Agent extension.
4. (Optional): Select the star next to SQL virtual machines to add this option to your
Favorites menu.
5. Select SQL virtual machines.
6. The portal lists all SQL Server VMs available within the subscription. Select the one
that you want to manage to open the SQL virtual machines resource. Use the
search box if your SQL Server VM isn't appearing.
Selecting your SQL Server VM opens the SQL virtual machines resource:
Tip
The SQL virtual machines resource is for dedicated SQL Server settings. Select the
name of the VM in the Virtual machine box to open settings that are specific to the
VM, but not exclusive to SQL Server.
You can also modify the edition of SQL Server from the Configure page as well, such as
Enterprise, Standard, or Developer.
Changing the license and edition metadata in the Azure portal is only supported once
the version and edition of SQL Server has been modified internally to the VM. To learn
more see, change the version and edition of SQL Server on Azure VMs.
Storage
Use the Storage Configuration page of the SQL virtual machines resource to extend
your data, log, and tempdb drives. Review storage configuration to learn more.
It's also possible to modify your tempdb settings using the Storage configuration page,
such as the number of tempdb files, as well as the initial size, and the autogrowth ratio.
Select Configure next to tempdb to open the tempdb Configuration page.
Choose Yes next to Configure tempdb data files to modify your settings, and then
choose Yes next to Manage tempdb database folders on restart to allow Azure to
manage your tempdb configuration and implement your settings the next time your SQL
Server service starts:
Restart your SQL Server service to apply your changes.
Patching
Use the Patching page of the SQL virtual machines resource to enable auto patching of
your VM and automatically install Windows and SQL Server updates marked as
Important. You can also configure a maintenance schedule here, such as running
patching daily, as well as a local start time for maintenance, and a maintenance window.
Backups
Use the Backups page of the SQL virtual machines resource to configure your
automated backup settings, such as the retention period, which storage account to use,
encryption, whether or not to back up system databases, and a backup schedule.
To learn more, see SQL best practices assessment for SQL Server on Azure VMs.
Security Configuration
Use the Security Configuration page of the SQL virtual machines resource to configure
SQL Server security settings such as Azure Key Vault integration, least privilege mode or
if you're on SQL Server 2022, Azure Active Directory (Azure AD) authentication.
To learn more, see the Security best practices.
Note
The ability to change the connectivity and SQL Server authentication settings after
the SQL Server VM is deployed was removed from the Azure portal in April 2023.
You can still specify these settings during SQL Server VM deployment, or use SQL
Server Management Studio (SSMS) to update these settings manually from within
the SQL Server VM after deployment.
Next steps
For more information, see the following articles:
Applies to:
SQL Server on Azure VM
This article describes how to change the license model for a SQL Server virtual machine
(VM) in Azure by using the SQL IaaS Agent Extension.
Overview
There are three license models for an Azure VM that's hosting SQL Server: pay-as-you-go,
Azure Hybrid Benefit (AHB), and High Availability/Disaster Recovery (HA/DR). You can
modify the license model of your SQL Server VM by using the Azure portal, the Azure
CLI, or PowerShell.
The pay-as-you-go model means that the per-second cost of running the Azure
VM includes the cost of the SQL Server license.
Azure Hybrid Benefit allows you to use your own SQL Server license with a VM
that's running SQL Server.
The HA/DR license type is used for the free HA/DR replica in Azure.
Azure Hybrid Benefit allows the use of SQL Server licenses with Software Assurance
("Qualified License") on Azure virtual machines. With Azure Hybrid Benefit, customers
aren't charged for the use of a SQL Server license on a VM. But they must still pay for
the cost of the underlying cloud compute (that is, the base rate), storage, and backups.
They must also pay for I/O associated with their use of the services (as applicable).
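To make the billing difference concrete, here is a sketch with purely hypothetical hourly rates (real prices vary by region, VM size, and SQL Server edition): pay-as-you-go folds the SQL Server license into the VM rate, while Azure Hybrid Benefit charges only the base compute rate.

```python
# Purely hypothetical rates, in cents, to illustrate the billing model
# described above; real prices vary by region, VM size, and edition.
BASE_RATE_CENTS_PER_HOUR = 50    # assumed compute-only rate
SQL_LICENSE_CENTS_PER_HOUR = 30  # assumed SQL Server license portion

def hourly_cost_cents(license_model):
    """Pay-as-you-go includes the SQL Server license in the VM rate;
    Azure Hybrid Benefit (AHB) charges only the base compute rate."""
    if license_model == "PAYG":
        return BASE_RATE_CENTS_PER_HOUR + SQL_LICENSE_CENTS_PER_HOUR
    if license_model == "AHB":
        return BASE_RATE_CENTS_PER_HOUR
    raise ValueError(f"unknown license model: {license_model}")

print(hourly_cost_cents("PAYG"))  # 80 cents/hour, license included
print(hourly_cost_cents("AHB"))   # 50 cents/hour, using your own license
```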
To estimate your cost savings with Azure Hybrid Benefit, use the Azure Hybrid
Benefit Savings Calculator. To estimate the cost of pay-as-you-go licensing, review the
Azure Pricing Calculator.
According to the Microsoft Product Terms : "Customers must indicate that they are
using Azure SQL Database (Managed Instance, Elastic Pool, and Single Database), Azure
Data Factory, SQL Server Integration Services, or SQL Server Virtual Machines under
Azure Hybrid Benefit for SQL Server when configuring workloads on Azure."
To indicate the use of Azure Hybrid Benefit for SQL Server on Azure VM and be
compliant, you have three options:
Provision a virtual machine by using a bring-your-own-license SQL Server image
from Azure Marketplace. This option is available only for customers who have an
Enterprise Agreement.
Provision a virtual machine by using a pay-as-you-go SQL Server image from
Azure Marketplace and activate the Azure Hybrid Benefit.
Self-install SQL Server on Azure VM, manually register with the SQL IaaS Agent
Extension, and activate Azure Hybrid Benefit.
The license type of SQL Server can be configured when the VM is provisioned, or
anytime afterward. Switching between license models incurs no downtime, does not
restart the VM or the SQL Server service, doesn't add any additional costs, and is
effective immediately. In fact, activating Azure Hybrid Benefit reduces cost.
Prerequisites
Changing the licensing model of your SQL Server VM has the following requirements:
An Azure subscription .
A SQL Server VM registered with the SQL IaaS Agent Extension.
Software Assurance is a requirement to utilize the Azure Hybrid Benefit license
type, but pay-as-you-go customers can use the HA/DR license type if the VM is
being used as a passive replica in a high availability/disaster recovery
configuration.
You can modify the license model directly from the portal:
1. Open the Azure portal and open the SQL virtual machines resource for your
SQL Server VM.
2. Select Configure under Settings.
3. Select the Azure Hybrid Benefit option, and select the check box to confirm
that you have a SQL Server license with Software Assurance.
4. Select Apply at the bottom of the Configure page.
Remarks
Azure Cloud Solution Provider (CSP) customers can use the Azure Hybrid Benefit
by first deploying a pay-as-you-go VM and then converting it to bring-your-own-
license, if they have active Software Assurance.
If you drop your SQL virtual machines resource, you will go back to the hard-coded
license setting of the image.
The ability to change the license model is a feature of the SQL IaaS Agent
Extension. Deploying an Azure Marketplace image through the Azure portal
automatically registers a SQL Server VM with the extension. But customers who are
self-installing SQL Server will need to manually register their SQL Server VM.
Adding a SQL Server VM to an availability set requires re-creating the VM. As such,
any VMs added to an availability set will go back to the default pay-as-you-go
license type. Azure Hybrid Benefit will need to be enabled again.
Limitations
Changing the license model is:
Only supported for the Standard and Enterprise editions of SQL Server. License
changes for Express, Web, and Developer are not supported.
Only supported for virtual machines deployed through the Azure Resource
Manager model. Virtual machines deployed through the classic model are not
supported.
Available only for the public or Azure Government clouds. Currently unavailable for
the Azure China region.
Additionally, changing the license model to Azure Hybrid Benefit requires Software
Assurance.
Note
To avoid being charged for your SQL Server instance, see Pricing guidance for SQL
Server on Azure VMs.
To remove a SQL Server instance and associated billing from a Pay-As-You-Go SQL
Server VM, or if you are being charged for a SQL instance after uninstalling it:
Optional
To disable the SQL Server Express edition service, disable service startup.
Known errors
Review the commonly known errors and their resolutions.
You'll need to register your SQL Server VM with the SQL IaaS Agent extension.
The SQL IaaS Agent extension is required to change the license. Make sure you remove
and reinstall the SQL IaaS Agent extension if it's in a failed state.
SQL Server edition, version, or licensing in the Azure portal doesn't reflect correctly
after an edition or version upgrade
The SQL IaaS Agent extension is required to change the license. Make sure you repair
the extension if it's in a failed state.
Next steps
For more information, see the following articles:
Applies to:
SQL Server on Azure VM
This article describes how to change the edition of SQL Server on a Windows virtual
machine in Azure.
The edition of SQL Server is determined by the product key, and is specified during the
installation process using the installation media. The edition dictates what features are
available in the SQL Server product. You can change the SQL Server edition with the
installation media and either downgrade to reduce cost or upgrade to enable more
features.
After the edition of SQL Server has been changed on the SQL Server VM, you must
update the edition property of SQL Server in the Azure portal for billing purposes.
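To confirm which edition and version are actually installed before updating the portal property, you can query the instance. A minimal sketch, assuming the SqlServer PowerShell module and a local default instance:

```powershell
# Requires the SqlServer module; 'localhost' assumes a default instance on the VM.
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;
"@
```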
Prerequisites
To do an in-place change of the edition of SQL Server, you need the following:
An Azure subscription.
A SQL Server VM on Windows registered with the SQL IaaS Agent extension.
Setup media with the desired edition of SQL Server. Customers who have Software
Assurance can obtain their installation media from the Volume Licensing
Center . Customers who don't have Software Assurance can deploy an Azure
Marketplace SQL Server VM image with the desired edition of SQL Server and then
copy the setup media (typically located in C:\SQLServerFull ) from it to their target
SQL Server VM.
Upgrade an edition
Warning
Upgrading the edition of SQL Server will restart the service for SQL Server, along
with any associated services, such as Analysis Services and R Services.
To upgrade the edition of SQL Server, obtain the SQL Server setup media for the desired
edition of SQL Server, and then do the following:
3. Select Next until you reach the Ready to upgrade edition page, and then select
Upgrade. The setup window might stop responding for a few minutes while the
change is taking effect. A Complete page will confirm that your edition upgrade is
finished.
4. After the SQL Server edition is upgraded, modify the edition property of the SQL
Server virtual machine in the Azure portal. This will update the metadata and
billing associated with this VM.
After you change the edition of SQL Server, register your SQL Server VM with the SQL
IaaS Agent extension again so that you can use the Azure portal to view the edition of
SQL Server. Then be sure to Change the edition of SQL Server in the Azure portal.
Downgrade an edition
To downgrade the edition of SQL Server, you need to completely uninstall SQL Server,
and reinstall it again with the desired edition setup media. You can get the setup media
by deploying a SQL Server VM from the marketplace image with your desired edition,
and then copying the setup media to the target SQL Server VM, or using the Volume
Licensing Center if you have Software Assurance.
Warning
You can downgrade the edition of SQL Server by following these steps:
After you change the edition of SQL Server, register your SQL Server VM with the SQL
IaaS Agent extension again so that you can use the Azure portal to view the edition of
SQL Server. Then be sure to Change the edition of SQL Server in the Azure portal.
Portal
To change the edition property of the SQL Server VM for billing purposes by using
the Azure portal, follow these steps:
4. Review the warning that says you must change the SQL Server edition first,
and that the edition property must match the SQL Server edition.
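The edition property can also be updated outside the portal. The following is a sketch only; the -Sku parameter on Update-AzSqlVM is an assumption to verify against your version of the Az.SqlVirtualMachine module:

```powershell
# Sketch: update the billing edition metadata after changing the installed edition.
# The -Sku parameter is an assumption; confirm with Get-Help Update-AzSqlVM.
Update-AzSqlVM -ResourceGroupName 'my-resource-group' `
               -Name 'my-sql-vm' `
               -Sku 'Standard'   # must match the edition actually installed on the VM
```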
Remarks
The edition property for the SQL Server VM must match the edition of the SQL
Server instance installed for all SQL Server virtual machines, including both pay-as-
you-go and bring-your-own-license types of licenses.
If you drop your SQL Server VM resource, you will go back to the hard-coded
edition setting of the image.
The ability to change the edition is a feature of the SQL IaaS Agent extension.
Deploying an Azure Marketplace image through the Azure portal automatically
registers a SQL Server VM with the SQL IaaS Agent extension. However, customers
who are self-installing SQL Server will need to manually register their SQL Server
VM.
Adding a SQL Server VM to an availability set requires re-creating the VM. Any
VMs added to an availability set will go back to the default edition, and the edition
will need to be modified again.
Next steps
For more information, see the following articles:
Applies to:
SQL Server on Azure VM
This article describes how to change the version of Microsoft SQL Server on a Windows
virtual machine (VM) in Microsoft Azure.
2. We recommend that you check the compatibility certification for the version that
you're changing to, so that you can use database compatibility modes to minimize
the effect of the upgrade.
3. Review the following articles to help ensure a successful outcome:
Video: Modernizing SQL Server | Pam Lahoud & Pedro Lopes | 20 Years of
PASS
Database Experimentation Assistant for A/B testing
Upgrading Databases by using the Query Tuning Assistant
Change the Database Compatibility Level and use the Query Store
Prerequisites
To do an in-place upgrade of SQL Server, you need the following:
SQL Server installation media. Customers who have Software Assurance can
obtain their installation media from the Volume Licensing Center . Customers
who don't have Software Assurance can deploy an Azure Marketplace SQL Server
VM image with the desired version of SQL Server and then copy the setup media
(typically located in C:\SQLServerFull ) from it to their target SQL Server VM.
Version upgrades should follow the support upgrade paths.
Upgrade SQL Version
Warning
Upgrading the version of SQL Server will restart the service for SQL Server in
addition to any associated services, such as Analysis Services and R Services.
To upgrade the version of SQL Server, obtain the SQL Server setup media for a later
version that supports the upgrade path of SQL Server, and then do the following steps:
1. Back up the databases, including system (except tempdb) and user databases,
before you start the process. You can also create an application-consistent VM-
level backup by using Azure Backup Services.
3. The Installation Wizard starts the SQL Server Installation Center. To upgrade an
existing instance of SQL Server, select Installation on the navigation pane, and
then select Upgrade from an earlier version of SQL Server.
4. On the Product Key page, select an option to indicate whether you are upgrading
to a free edition of SQL Server or you have a PID key for a production version of
the product. For more information, see Editions and supported features of SQL
Server 2019 (15.x) and Supported version and edition Upgrades (SQL Server 2016).
5. Select Next until you reach the Ready to upgrade page, and then select Upgrade.
The setup window might stop responding for several minutes while the change is
taking effect. A Complete page will confirm that your upgrade is completed. For a
step-by-step procedure to upgrade, see the complete procedure.
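The backup in step 1 can be scripted. A sketch, assuming the SqlServer PowerShell module, a local default instance, and a placeholder backup folder:

```powershell
# Back up every database except tempdb before starting the upgrade.
# Requires the SqlServer module; instance name and paths are placeholders.
Import-Module SqlServer
$instance  = 'localhost'
$backupDir = 'D:\Backups'

Get-SqlDatabase -ServerInstance $instance |
    Where-Object { $_.Name -ne 'tempdb' } |
    ForEach-Object {
        Backup-SqlDatabase -ServerInstance $instance `
                           -Database $_.Name `
                           -BackupFile (Join-Path $backupDir "$($_.Name).bak")
    }
```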
If you have changed the SQL Server edition in addition to changing the version, also
update the edition, and refer to the Verify Version and Edition in Portal section to
change the SQL VM instance.
Downgrade the version of SQL Server
To downgrade the version of SQL Server, you have to completely uninstall SQL Server,
and reinstall it again by using the desired version. This is similar to a fresh installation of
SQL Server because you will not be able to restore the earlier database from a later
version to the newly installed earlier version. The databases will have to be re-created
from scratch. If you also changed the edition of SQL Server during the upgrade, change
the Edition property of the SQL Server VM in the Azure portal to the new edition value.
This updates the metadata and billing that is associated with this VM.
Warning
You can downgrade the version of SQL Server by following these steps:
1. Make sure that you aren't using any features that are available only in the later
version.
2. Back up all databases, including system (except tempdb) and user databases.
3. Export all the necessary server-level objects (such as server triggers, roles, logins,
linked servers, jobs, credentials, and certificates).
4. If you do not have scripts to re-create your user databases on the earlier version,
you must script out all objects and export all data by using BCP.exe, SSIS, or
DACPAC.
Make sure that you select the correct options when you script such items as the
target version, dependent objects, and advanced options.
7. Install SQL Server by using the media for the desired version of the program.
9. Import all the necessary server-level objects (that were exported in Step 3).
10. Re-create all the necessary user databases from scratch (by using created scripts or
the files from Step 4).
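For the data export in step 4, bcp.exe is one option. A sketch with placeholder database, table, and path names:

```powershell
# bcp.exe ships with the SQL Server client tools.
# Database name, table list, and export folder are placeholders.
$database = 'MyDatabase'
$tables   = @('dbo.Orders', 'dbo.Customers')

foreach ($t in $tables) {
    & bcp.exe "$database.$t" out "D:\Export\$($t -replace '\.', '_').dat" `
        -n -S 'localhost' -T   # -n: native format; -T: Windows authentication
}
```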
Next steps
For more information, see the following articles:
Applies to:
SQL Server on Azure VM
This article teaches you how to configure your storage for your SQL Server on Azure
Virtual Machines (VMs).
SQL Server VMs deployed through Azure Marketplace images automatically follow default
storage best practices, which can be modified during deployment. Some of these
configuration settings can be changed after deployment.
Prerequisites
To use the automated storage configuration settings, your virtual machine requires the
following characteristics:
New VMs
The following sections describe how to configure storage for new SQL Server virtual
machines.
Azure portal
When provisioning an Azure VM using a SQL Server gallery image, select Change
configuration under Storage on the SQL Server Settings tab to open the Configure
storage page. You can either leave the values at default, or modify the type of disk
configuration that best suits your needs based on your workload.
Choose the drive location for your data files and log files, specifying the disk type, and
number of disks. Use the IOPS values to determine the best storage configuration to
meet your business needs. Choosing premium storage sets the caching to ReadOnly for
the data drive, and None for the log drive as per SQL Server VM performance best
practices.
The disk configuration is completely customizable so that you can configure the storage
topology, disk type, and IOPS you need for your SQL Server VM workload. You can also
use Ultra Disk (preview) as the disk type if your SQL Server VM is in one of the
supported regions (East US 2, Southeast Asia, and North Europe) and you've enabled
ultra disks for your subscription.
Configure your tempdb database settings under Tempdb storage, such as the location of
the database files, as well as the number of files, initial size, and autogrowth size in MB.
Currently, during deployment, the maximum number of tempdb files is 8, but more files
can be added after the SQL Server VM is deployed.
Additionally, you have the ability to set the caching for the disks. Azure VMs have a
multi-tier caching technology called Blob Cache when used with Premium Disks. Blob
Cache uses a combination of the Virtual Machine RAM and local SSD for caching.
ReadOnly caching is highly beneficial for SQL Server data files that are stored on
Premium Storage. ReadOnly caching brings low read latency and high read IOPS and
throughput, because reads are performed from the cache, which is within the VM
memory and local SSD. These reads are much faster than reads from the data disk,
which come from Azure Blob storage. Premium storage doesn't count reads served from
the cache toward the disk IOPS and throughput, so your application is able to
achieve higher total IOPS and throughput.
None cache configuration should be used for the disks hosting SQL Server Log file
as the log file is written sequentially and does not benefit from ReadOnly caching.
ReadWrite caching should not be used to host SQL Server files, because SQL Server
doesn't support data consistency with the ReadWrite cache. Writes waste capacity of
the ReadOnly blob cache, and latencies slightly increase if writes go through the
ReadOnly blob cache layers.
Tip
You can increase the performance cap limitation by increasing the VM size. This
won't stop provisioning.
Based on your choices, Azure performs the following storage configuration tasks after
creating the VM:
For a full walkthrough of how to create a SQL Server VM in the Azure portal, see the
provisioning tutorial.
Quickstart template
You can use the following quickstart template to deploy a SQL Server VM using storage
optimization.
Note
Some VM sizes may not have temporary or local storage. If you deploy a SQL
Server on Azure VM without temporary storage, tempdb data and log files are
placed in the data folder.
Existing VMs
For existing SQL Server VMs, you can modify some storage settings in the Azure portal.
Open your SQL virtual machines resource, and select Overview. The SQL Server
Overview page shows the current storage usage of your VM. All drives that exist on
your VM are displayed in this chart. For each drive, the storage space displays in four
sections:
SQL data
SQL log
Other (non-SQL storage)
Available
You can modify the disk settings for the drives that were configured during the SQL
Server VM creation process. Selecting Configure opens the drive modification page,
allowing you to change the disk type, as well as add additional disks.
You can also configure the settings for tempdb directly from the Azure portal, such as the
number of data files, their initial size, and the autogrowth ratio. See configure tempdb
to learn more.
Automated changes
This section provides a reference for the storage configuration changes that Azure
automatically performs during SQL Server VM provisioning or configuration in the Azure
portal.
Azure configures a storage pool from storage selected from your VM. The next
section of this topic provides details about storage pool configuration.
Automatic storage configuration always uses premium SSD P30 data disks.
Consequently, there's a 1:1 mapping between your selected number of terabytes and
the number of data disks attached to your VM.
For pricing information, see the Storage pricing page on the Disk Storage tab.
Setting | Value
Cache | Read

Workload type | Description | Trace flags
Transactional processing | Optimizes the storage for traditional database OLTP workloads | Trace Flag 1117, Trace Flag 1118
Data warehousing | Optimizes the storage for analytic and reporting workloads | Trace Flag 610
Note
You can only specify the workload type when you provision a SQL Server virtual
machine by selecting it in the storage configuration step.
Enable caching
Change the caching policy at the disk level. You can do so using the Azure portal,
PowerShell, or the Azure CLI.
To change your caching policy in the Azure portal, follow these steps:
5. After the change takes effect, reboot the SQL Server VM and start the SQL Server
service.
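In PowerShell, the caching policy for a data disk can be changed with Az.Compute. A sketch with placeholder resource names (stop the SQL Server service first, as in the steps above):

```powershell
# Requires Az.Compute and an authenticated session; names are placeholders.
$vm = Get-AzVM -ResourceGroupName 'my-resource-group' -Name 'my-sql-vm'

# Set ReadOnly caching on the data disk, then push the change to Azure.
Set-AzVMDataDisk -VM $vm -Name 'my-sql-vm-datadisk-0' -Caching ReadOnly | Out-Null
Update-AzVM -ResourceGroupName 'my-resource-group' -VM $vm
```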
If your disks are striped, enable Write Acceleration for each disk individually, and
shut down your Azure VM before making any changes.
To enable Write Acceleration using the Azure portal, follow these steps:
1. Stop your SQL Server service. If your disks are striped, shut down the virtual
machine.
4. Choose the cache option with Write Accelerator for your disk from the drop-
down.
5. After the change takes effect, start the virtual machine and SQL Server service.
Disk striping
For more throughput, you can add additional data disks and use disk striping. To
determine the number of data disks, analyze the throughput and bandwidth required
for your SQL Server data, log, and tempdb files. Throughput and bandwidth limits
vary by VM size. To learn more, see VM size.
For Windows 8/Windows Server 2012 or later, use Storage Spaces with the
following guidelines:
For example, the following PowerShell creates a new storage pool with the interleave
size to 64 KB and the number of columns equal to the amount of physical disk in the
storage pool:
PowerShell
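A sketch of such a script, assuming poolable disks are already attached to the VM; the friendly names and drive label are placeholders:

```powershell
# Pool all attached disks that are eligible for pooling; names are placeholders.
# Interleave 65536 = 64 KB; NumberOfColumns = one column per physical disk.
$PhysicalDisks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName 'DataFiles' `
        -StorageSubsystemFriendlyName 'Windows Storage*' `
        -PhysicalDisks $PhysicalDisks |
    New-VirtualDisk -FriendlyName 'DataFiles' -Interleave 65536 `
        -NumberOfColumns $PhysicalDisks.Count `
        -ResiliencySettingName Simple -UseMaximumSize |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'DataDisks' `
        -AllocationUnitSize 65536 -Confirm:$false
```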
If you are using Storage Spaces Direct (S2D) with SQL Server Failover Cluster
Instances, you must configure a single pool. Although different volumes can be
created on that single pool, they will all share the same characteristics, such as the
same caching policy.
Determine the number of disks associated with your storage pool based on your
load expectations. Keep in mind that different VM sizes allow different numbers of
attached data disks. For more information, see Sizes for virtual machines.
Known issues
Configure on the Storage Configuration blade can be grayed out if you've customized
your storage pool, or if you are using a non-Marketplace image.
Next steps
For other topics related to running SQL Server in Azure VMs, see SQL Server on Azure
Virtual Machines.
Enable Azure AD authentication for SQL
Server on Azure VMs
Article • 05/25/2023
Applies to:
SQL Server on Azure VM
This article teaches you to enable Azure Active Directory (Azure AD) authentication for
your SQL Server on Azure virtual machines (VMs).
Overview
Starting with SQL Server 2022, you can connect to SQL Server on Azure VMs using one
of the following Azure AD identity authentication methods:
Azure AD Password
Azure AD Integrated
Azure AD Universal with Multi-Factor Authentication
Azure Active Directory access token
When you create an Azure AD login for SQL Server and when a user logs into SQL
Server using the Azure AD login, SQL Server uses a managed identity to query Microsoft
Graph. When you enable Azure AD authentication for your SQL Server on Azure VM, you
need to provide a managed identity that SQL Server can use to communicate with Azure
AD. This managed identity needs to have permission to query Microsoft Graph.
When enabling a managed identity for a resource in Azure, the security boundary of the
identity is the resource to which it's attached. For example, the security boundary for a
virtual machine with managed identities for Azure resources enabled is the virtual
machine. Any code running on that VM is able to call the managed identities endpoint
and request tokens. When enabling a managed identity for SQL Server on Azure VMs,
the identity is attached to the virtual machine, so the security boundary is the virtual
machine. The experience is similar when working with other resources that support
managed identities. For more information, read the Managed Identities FAQ.
Azure AD authentication with SQL Server on Azure VMs uses either a system-assigned
VM managed identity, or a user-assigned managed identity, which offer the following
benefits:
To get started with managed identities, review Configure managed identities using the
Azure portal.
Prerequisites
To enable Azure AD authentication on your SQL Server, you need the following
prerequisites:
Grant permissions
The managed identity you choose to facilitate authentication between SQL Server and
Azure AD must have the following three Microsoft Graph application permissions (app
roles): User.Read.All, GroupMember.Read.All, and Application.Read.All.
Alternatively, adding the managed identity to the Azure AD Directory Readers role
grants sufficient permissions. Another way to assign the Directory Readers role to a
managed identity is to assign the Directory Readers role to a group in Azure AD. The
group owners can then add the Virtual Machine managed identity as a member of this
group. This minimizes involving Azure AD Global administrators and delegates the
responsibility to the group owners.
2. On the Azure Active Directory overview page, choose Roles and administrators
under Manage:
3. Type Directory readers in the search box, and then select the role Directory readers
to open the Directory Readers | Assignments page:
7. Verify that you see your chosen identity under Select members and then select
Next.
8. Verify that your assignment type is set to Active and the box next to Permanently
assigned is checked. Enter a business justification, such as Adding Directory Reader
role permissions to the system-assigned identity for VM2 and then select Assign to
save your settings and go back to the Directory Readers | Assignments page.
9. On the Directory Readers | Assignments page, confirm you see your newly added
identity under Directory Readers.
Add app role permissions
You can use Azure PowerShell to grant app roles to a managed identity. To do so, follow
these steps:
PowerShell
PowerShell
PowerShell
PowerShell
$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq
"GroupMember.Read.All"}
PowerShell
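Pulling the fragments above together, a sketch of the app role assignment using the Microsoft.Graph PowerShell module; the VM identity display name is a placeholder:

```powershell
# Requires the Microsoft.Graph module and sufficient directory privileges.
Connect-MgGraph -Scopes 'AppRoleAssignment.ReadWrite.All','Application.Read.All'

# Service principal for Microsoft Graph (well-known application ID).
$AAD_SP = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"

# Managed identity of the SQL Server VM; the display name is a placeholder.
$MSI = Get-MgServicePrincipal -Filter "DisplayName eq 'my-sql-vm'"

# Assign each of the three required Microsoft Graph app roles to the identity.
foreach ($role in 'User.Read.All', 'GroupMember.Read.All', 'Application.Read.All') {
    $AAD_AppRole = $AAD_SP.AppRoles | Where-Object { $_.Value -eq $role }
    New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $MSI.Id `
        -PrincipalId $MSI.Id -ResourceId $AAD_SP.Id -AppRoleId $AAD_AppRole.Id
}
```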
You can validate permissions were assigned to the managed identity by doing the
following:
Outbound communication from SQL Server to Azure AD and the Microsoft Graph
endpoint.
Outbound communication from the SQL client to Azure AD.
Firewalls on the SQL Server VM and any SQL client need to allow outbound traffic on
ports 80 and 443.
The Azure VNet NSG rule for the VNet that hosts your SQL Server VM should have the
following:
A Service Tag of AzureActiveDirectory .
Destination port ranges of: 80, 443.
Action set to Allow.
A high priority (which is a low number).
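An NSG rule matching the list above can be added with Az.Network. A sketch with placeholder NSG and resource group names:

```powershell
# Requires Az.Network; NSG and resource group names are placeholders.
$nsg = Get-AzNetworkSecurityGroup -Name 'sqlvm-nsg' -ResourceGroupName 'my-resource-group'

$nsg | Add-AzNetworkSecurityRuleConfig -Name 'AllowAzureActiveDirectory' `
        -Direction Outbound -Access Allow -Priority 100 `
        -Protocol Tcp `
        -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix 'AzureActiveDirectory' `
        -DestinationPortRange '80','443' |
    Set-AzNetworkSecurityGroup
```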
Note
After Azure AD authentication is enabled, you can follow the same steps in this
section to change the configuration to use a different managed identity.
Portal
To enable Azure AD authentication to your SQL Server VM, follow these steps:
4. Choose the managed identity type from the drop-down, either System-
assigned or User-assigned. If you choose user-assigned, then select the
identity you want to use to authenticate to SQL Server on your Azure VM from
the User-assigned managed identity drop-down that appears.
After Azure AD has been enabled, you can follow the same steps to change which
managed identity can authenticate to your SQL Server VM.
Note
The error The selected managed identity does not have enough permissions
for Azure AD Authentication indicates that permissions have not been
properly assigned to the identity you've selected. Check the Grant permissions
section to assign proper permissions.
Limitations
Consider the following limitations:
Azure AD authentication is only supported with Windows SQL Server 2022 VMs
registered with the SQL IaaS Agent extension and deployed to the public cloud.
The identity you choose to authenticate to SQL Server must have either the Azure
AD Directory Readers role permissions or the following three Microsoft Graph
application permissions (app roles): User.Read.All, GroupMember.Read.All, and
Application.Read.All.
Next steps
Review the security best practices for SQL Server.
For other articles related to running SQL Server in Azure VMs, see SQL Server on Azure
Virtual Machines overview. If you have questions about SQL Server virtual machines, see
the Frequently asked questions.
To learn more, see the other articles in this best practices series:
Quick checklist
VM size
Storage
HADR settings
Collect baseline
Automated Patching for SQL Server on
Azure virtual machines
Article • 03/30/2023
Applies to:
SQL Server on Azure VM
Important
Only Windows and SQL Server updates marked as Important or Critical are
installed. Other SQL Server updates, such as service packs and cumulative updates
that are not marked as Important or Critical, must be installed manually.
Prerequisites
To use Automated Patching, you need the following prerequisites:
Automated Patching relies on the SQL Server IaaS Agent Extension. Current SQL
virtual machine gallery images add this extension by default. For more information,
review SQL Server IaaS Agent Extension.
Install the latest Azure PowerShell commands if you plan to configure Automated
Patching by using PowerShell.
Automated Patching is supported starting with SQL Server 2008 R2 on Windows Server
2008 R2.
There are also several other ways to enable automatic patching of Azure VMs, such
as Update Management or Automatic VM guest patching. Choose only one option
to automatically update your VM as overlapping tools may lead to failed updates.
If you want to receive ESU updates without using the automated patching feature,
you can use the built-in Windows Update channel.
For SQL Server VMs in different availability zones that participate in an Always On
availability group, configure the automated patching schedule so that availability
replicas in different availability zones aren't patched at the same time.
Settings
The following table describes the options that can be configured for Automated
Patching. The actual configuration steps vary depending on whether you use the Azure
portal or Azure Windows PowerShell commands.
Maintenance schedule | Everyday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday | The schedule for downloading and installing Windows, SQL Server, and Microsoft updates for your virtual machine.
New VMs
Use the Azure portal to configure Automated Patching when you create a new SQL
Server virtual machine in the Resource Manager deployment model.
On the SQL Server settings tab, select Change configuration under Automated
patching. The following Azure portal screenshot shows the SQL Automated Patching
blade.
For more information, see Provision a SQL Server virtual machine on Azure.
Existing VMs
For existing SQL Server virtual machines, open your SQL virtual machines resource and
select Patching under Settings.
When you're finished, select the OK button on the bottom of the SQL Server
configuration blade to save your changes.
If you're enabling Automated Patching for the first time, Azure configures the SQL
Server IaaS Agent in the background. During this time, the Azure portal might not show
that Automated Patching is configured. Wait several minutes for the agent to be
installed and configured. After that the Azure portal reflects the new settings.
Azure PowerShell
$vmname = "vmname"
$resourcegroupname = "resourcegroupname"
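The configuration that the next paragraph refers to looks like the following sketch; the schedule values are examples, and the cmdlets assume the Az PowerShell module:

```powershell
# Schedule values below are examples; adjust to your maintenance window.
$aps = New-AzVMSqlServerAutoPatchingConfig -Enable `
    -DayOfWeek 'Thursday' `
    -MaintenanceWindowStartingHour 11 `
    -MaintenanceWindowDuration 120 `
    -PatchCategory 'Important'

# Apply the patching configuration to the SQL Server VM.
Set-AzVMSqlServerExtension -AutoPatchingSettings $aps `
    -VMName $vmname -ResourceGroupName $resourcegroupname
```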
Based on this example, the following table describes the practical effect on the target
Azure VM:
Parameter Effect
It could take several minutes to install and configure the SQL Server IaaS Agent.
To disable Automated Patching, run the same script without the -Enable parameter for
New-AzVMSqlServerAutoPatchingConfig. The absence of the -Enable parameter signals
the command to disable the feature.
Next steps
For information about other available automation tasks, see SQL Server IaaS Agent
Extension.
For more information about running SQL Server on Azure VMs, see SQL Server on Azure
virtual machines overview.
SQL best practices assessment for SQL
Server on Azure VMs
Article • 03/15/2023
Applies to:
SQL Server on Azure VM
The SQL best practices assessment feature of the Azure portal identifies possible
performance issues and evaluates whether your SQL Server on Azure Virtual Machines
(VM) is configured to follow best practices, using the rich ruleset provided by the
SQL Assessment API.
Overview
Once the SQL best practices assessment feature is enabled, your SQL Server instance
and databases are scanned to provide recommendations for things like indexes,
deprecated features, enabled or missing trace flags, statistics, etc. Recommendations are
surfaced to the SQL VM management page of the Azure portal .
Assessment results are uploaded to your Log Analytics workspace using Microsoft
Monitoring Agent (MMA). If your VM is already configured to use Log Analytics, the SQL
best practices assessment feature uses the existing connection. Otherwise, the MMA
extension is installed to the SQL Server VM and connected to the specified Log Analytics
workspace.
Assessment run time depends on your environment (number of databases, objects, and
so on), with a duration from a few minutes, up to an hour. Similarly, the size of the
assessment result also depends on your environment. Assessment runs against your
instance and all databases on that instance. In our testing, we observed that an
assessment run can have up to 5-10% CPU impact on the machine. In these tests, the
assessment was done while a TPC-C like application was running against the SQL Server.
Prerequisites
To use the SQL best practices assessment feature, you must have the following
prerequisites:
Your SQL Server VM must be registered with the SQL Server IaaS extension.
A Log Analytics workspace in the same subscription as your SQL Server VM to
upload assessment results to.
SQL Server must be version 2012 or later.
Enable
You can enable SQL best practices assessments using the Azure portal or the Azure CLI.
Azure portal
To enable SQL best practices assessments using the Azure portal, follow these steps:
1. Sign into the Azure portal and go to your SQL Server VM resource .
2. Select SQL best practices assessments under Settings.
3. Select Enable SQL best practices assessments or Configuration to navigate to
the Configuration page.
4. Check the Enable SQL best practices assessments box and provide the
following:
a. The Log Analytics workspace that assessments will be uploaded to. If the
SQL Server VM has not been associated with a workspace previously, then
choose an existing workspace in the subscription from the drop-down.
Otherwise, the previously-associated workspace is already populated.
b. The Run schedule. You can choose to run assessments on demand, or
automatically on a schedule. If you choose a schedule, then provide the
frequency (weekly or monthly), day of week, recurrence (every 1-6 weeks),
and the time of day your assessments should start (local to VM time).
5. Select Apply to save your changes and deploy the Microsoft Monitoring
Agent to your SQL Server VM if it's not deployed already. An Azure portal
notification will tell you once the SQL best practices assessment feature is
ready for your SQL Server VM.
On a schedule
On demand
Azure portal
Azure portal
To run an on-demand assessment by using the Azure portal, select Run assessment
from the SQL best practices assessment blade of the Azure portal SQL Server VM
resource page.
View results
The Assessments results section of the SQL best practices assessments page shows a
list of the most recent assessment runs. Each row displays the start time of a run and the
status: scheduled, running, uploading results, completed, or failed. Each assessment run
has two parts: evaluating your instance, and uploading the results to your Log Analytics
workspace. The status field covers both parts. Assessment results are shown in Azure
workbooks.
To open the assessment results workbook, use any of these options:
Select the View latest successful assessment button on the SQL best practices
assessments page.
Choose a completed run from the Assessment results section of the SQL best
practices assessments page.
Select View assessment results from the Top 10 recommendations surfaced on the
Overview page of your SQL VM resource page.
Once you have the workbook open, you can use the drop-down to select previous runs.
You can view the results of a single run using the Results page or review historical
trends using the Trends page.
Results page
The Results page organizes the recommendations using tabs for All, New, and Resolved. Use
these tabs to view all recommendations from the current run, all the new
recommendations (the delta from previous runs), or resolved recommendations from
previous runs. Tabs help you track progress between runs. The Insights tab identifies the
most recurring issues and the databases with the most issues. Use these to decide
where to concentrate your efforts.
The graph groups assessment results in different categories of severity - high, medium,
low, and information. Select each category to see the list of recommendations, or search
for key phrases in the search box. It's best to start with the most severe
recommendations and go down the list.
The first grid shows you each recommendation and the number of instances your
environment hit that issue. When you select a row in the first grid, the second grid lists
all the instances for that particular recommendation. If there is no selection in the first
grid, the second grid shows all recommendations, which can be a long list. You can use
the drop-downs above the grid (Name, Severity, Tags, Check Id) to filter the results. You
can also use the Export to Excel and Open the last run query in the Logs view options
by selecting the small icons on the top right corner of each grid.
The passed section of the graph identifies recommendations your system already
follows.
Select the Message field to view detailed information for each recommendation, such
as a long description and relevant online resources.
Trends page
There are three charts on the Trends page to show changes over time: all issues, new
issues, and resolved issues. The charts help you see your progress. Ideally, the number
of recommendations should go down while the number of resolved issues goes up. The
legend shows the average number of issues for each severity level. Hover over the bars
to see the individual values for each run.
If there are multiple runs in a single day, only the latest run is included in the graphs on
the Trends page.
azure-cli
# This script is formatted for use with Az CLI on Windows PowerShell. You may need to update the script for use with Az CLI on other shells.
# This script enables the SQL best practices assessments feature for all SQL Servers on Azure VMs in a given subscription. It configures the VMs to use a Log Analytics workspace to upload assessment results. It sets a schedule to start an assessment run every Sunday at 11pm (local VM time).
$subscriptionId = 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
$myWsRg = 'myWsRg'
$myWsName = 'myWsName'
# Alternatively, you can use this command to only enable the feature without setting a schedule
Known Issues
You may encounter some of the following known issues when using SQL best practices
assessments.
Failed assessments
If the assessment or the upload of its results fails, the status of that run indicates the
failure. Selecting the status opens a context pane where you can see the details about
the failure and possible ways to remediate the issue.
Tip
If you have enforced TLS 1.0 or higher in Windows and disabled older SSL protocols
as described here, then you must also ensure that .NET Framework is configured to
use strong cryptography.
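The .NET Framework strong cryptography setting mentioned in this tip is commonly applied through the following registry values; this is a sketch of the widely documented approach, so verify it against Microsoft's current TLS guidance for your .NET Framework version before applying it:

```
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001
```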
Next steps
To register your SQL Server VM with the SQL Server IaaS Agent extension for SQL Server
on Azure VMs, see the articles for Automatic installation, Single VMs, or VMs in
bulk.
To learn about more capabilities provided by the SQL Server IaaS Agent extension for SQL
Server on Azure VMs, see Manage SQL Server VMs by using the Azure portal
Configure Azure Key Vault integration
for SQL Server on Azure VMs (Resource
Manager)
Article • 03/15/2023
Applies to:
SQL Server on Azure VM
There are multiple SQL Server encryption features, such as transparent data encryption
(TDE), column level encryption (CLE), and backup encryption. These forms of encryption
require you to manage and store the cryptographic keys you use for encryption. The
Azure Key Vault service is designed to improve the security and management of these
keys in a secure and highly available location. The SQL Server Connector enables SQL
Server to use these keys from Azure Key Vault.
If you are running SQL Server on-premises, there are steps you can follow to access
Azure Key Vault from your on-premises SQL Server instance. But for SQL Server on Azure
VMs, you can save time by using the Azure Key Vault Integration feature.
Note
The Azure Key Vault integration is available only for the Enterprise, Developer, and
Evaluation Editions of SQL Server. Starting with SQL Server 2019, Standard edition is
also supported.
When this feature is enabled, it automatically installs the SQL Server Connector,
configures the EKM provider to access Azure Key Vault, and creates the credential to
allow you to access your vault. If you looked at the steps in the previously mentioned
on-premises documentation, you can see that this feature automates steps 2 and 3. The
only thing you would still need to do manually is to create the key vault and keys. From
there, the entire setup of your SQL Server VM is automated. Once this feature has
completed this setup, you can execute Transact-SQL (T-SQL) statements to begin
encrypting your databases or backups as you normally would.
Note
You can also configure Key Vault integration by using a template. For more
information, see Azure quickstart template for Azure Key Vault integration .
Prepare for AKV Integration
To use Azure Key Vault Integration to configure your SQL Server VM, there are several
prerequisites:
The following sections describe these prerequisites and the information you need to
collect to later run the PowerShell cmdlets.
Note
This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.
Next, register an application with AAD. This will give you a Service Principal account that
has access to your key vault, which your VM will need. In the Azure Key Vault article, you
can find these steps in the Register an application with Azure Active Directory section, or
you can see the steps with screenshots in the Get an identity for the application
section of this blog post. Before completing these steps, you need to collect the
following information during this registration that is needed later when you enable
Azure Key Vault Integration on your SQL VM.
After the application is added, find the Application ID (also known as AAD ClientID
or AppID) on the Registered app blade.
The application ID is assigned later to the
$spName (Service Principal name) parameter in the PowerShell script to enable
Azure Key Vault Integration.
During these steps when you create your key, copy the secret for your key as is
shown in the following screenshot. This key secret is assigned later to the
$spSecret (Service Principal secret) parameter in the PowerShell script.
The application ID and the secret will also be used to create a credential in SQL
Server.
You must authorize this new application ID (or client ID) to have the following
access permissions: get, wrapKey, unwrapKey. This is done with the Set-AzKeyVaultAccessPolicy cmdlet. For more information, see Azure Key Vault
overview.
When you get to the Create a key vault step, note the returned vaultUri property, which
is the key vault URL. In the example provided in that step, shown below, the key vault
name is ContosoKeyVault, therefore the key vault URL would be
https://contosokeyvault.vault.azure.net/ .
The key vault URL is assigned later to the $akvURL parameter in the PowerShell script to
enable Azure Key Vault Integration.
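The name-to-URL relationship shown above is mechanical; as a trivial illustration (this helper is not part of any Azure SDK, and other Azure clouds use different DNS suffixes):

```python
def vault_url(vault_name: str) -> str:
    """Derive the vaultUri for a key vault in the Azure public cloud."""
    return f"https://{vault_name.lower()}.vault.azure.net/"

print(vault_url("ContosoKeyVault"))  # https://contosokeyvault.vault.azure.net/
```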
After the key vault is created, add a key to it. This key will be referenced later, when
you create an asymmetric key in SQL Server.
Note
Extensible Key Management (EKM) Provider version 1.0.4.0 is installed on the SQL
Server VM through the SQL infrastructure as a service (IaaS) extension. Upgrading
the SQL IaaS Agent extension will not update the provider version. Consider manually
upgrading the EKM provider version if needed (for example, when migrating to a SQL
Managed Instance).
New VMs
If you are provisioning a new SQL virtual machine with Resource Manager, the Azure
portal provides a way to enable Azure Key Vault integration.
For a detailed walkthrough of provisioning, see Provision a SQL virtual machine in the
Azure portal.
Existing VMs
For existing SQL virtual machines, open your SQL virtual machines resource and select
Security under Settings. Select Enable to enable Azure Key Vault integration.
The following screenshot shows how to enable Azure Key Vault in the portal for an
existing SQL Server VM (this SQL Server instance uses a non-default port 1401):
When you're finished, select the Apply button on the bottom of the Security page to
save your changes.
Note
The credential name we created here will be mapped to a SQL login later. This
allows the SQL login to access the key vault.
After enabling Azure Key Vault Integration, you can enable SQL Server encryption on
your SQL VM. First, you will need to create an asymmetric key inside your key vault and
a symmetric key within SQL Server on your VM. Then, you will be able to execute T-SQL
statements to enable encryption for your databases and backups.
There are several forms of encryption you can take advantage of:
The following Transact-SQL scripts provide examples for each of these areas.
USE master;
GO
--Create credential
--The <<SECRET>> here requires the <Application ID> (without hyphens) and <Secret> to be passed together without a space between them.
SECRET = '<<SECRET>>'
--Map the credential to a SQL login that has sysadmin permissions. This allows the SQL login to access the key vault when creating the asymmetric key in the next step.
CREATION_DISPOSITION = OPEN_EXISTING;
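The <<SECRET>> value described in the comment above is a plain concatenation; a hypothetical helper to build it (the values shown are placeholders, not real credentials):

```python
def build_ekm_secret(application_id: str, app_secret: str) -> str:
    """Concatenate the AAD application ID (hyphens removed) and the
    application secret, with no separator, as the T-SQL comment describes."""
    return application_id.replace("-", "") + app_secret

# Example with placeholder values only.
print(build_ekm_secret("01234567-89ab-cdef-0123-456789abcdef", "MySecretValue"))
```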
SQL
USE master;
-- encrypted by TDE.
GO
-- Alter the TDE Login to add the credential for use by the
GO
2. Create the database encryption key that will be used for TDE.
SQL
USE ContosoDatabase;
GO
GO
GO
Encrypted backups
1. Create a SQL Server login to be used by the Database Engine for encrypting
backups, and add the credential to it.
SQL
USE master;
GO
-- Alter the Encrypted Backup Login to add the credential for use by
GO
2. Backup the database specifying encryption with the asymmetric key stored in the
key vault.
SQL
USE master;
GO
SQL
WITH ALGORITHM=AES_256
--Encrypt syntax
-- Decrypt syntax
Additional resources
For more information on how to use these encryption features, see Using EKM with SQL
Server Encryption Features.
Note that the steps in this article assume that you already have SQL Server running on
an Azure virtual machine. If not, see Provision a SQL Server virtual machine in Azure. For
other guidance on running SQL Server on Azure VMs, see SQL Server on Azure Virtual
Machines overview.
Next steps
For more security information, review Security considerations for SQL Server on Azure
VMs.
Migrate log disk to Ultra disk
Article • 08/31/2022
Applies to:
SQL Server on Azure VM
Azure ultra disks deliver high throughput, high IOPS, and consistently low latency disk
storage for SQL Server on Azure Virtual Machine (VM).
This article teaches you to migrate your log disk to an ultra SSD to take advantage of
the performance benefits offered by ultra disks.
Back up database
Complete a full backup of your database.
Attach disk
Attach the Ultra SSD to your virtual machine after you have enabled ultra disk
compatibility on the VM.
Ultra disk is supported on a subset of VM sizes and regions. Before proceeding, validate
that your VM is in a region, zone, and size that supports ultra disk. You can determine
and validate VM size and region using the Azure CLI or PowerShell.
Enable compatibility
To enable compatibility, follow these steps:
5. Select Save.
Attach disk
Use the Azure portal to attach an ultra disk to your virtual machine. For details, see
Attach an ultra disk.
Once the disk is attached, start your VM once more using the Azure portal.
Format disk
Connect to your virtual machine and format your ultra disk.
Configure permissions
1. Verify the service account used by SQL Server. You can do so by using SQL Server
Configuration Manager or Services.msc.
2. Navigate to your new disk.
3. Create a folder (or multiple folders) to be used for your log file.
4. Right-click the folder and select Properties.
5. On the Security tab, grant full control access to the SQL Server service account.
6. Select OK to save your settings.
7. Repeat this for every root-level folder where you plan to have SQL data.
Caution
Detaching the database will take it offline, closing connections and rolling back any
transactions that are in-flight. Proceed with caution and during a down-time
maintenance window.
Transact-SQL (T-SQL)
1. Connect to your database in SQL Server Management Studio and open a New
Query window.
SQL
USE AdventureWorks
GO
sp_helpfile
GO
SQL
USE master
GO
sp_detach_db 'AdventureWorks'
GO
4. Use file explorer to move the log file to the new location on the ultra disk.
SQL
EXEC sp_attach_db @dbname = 'AdventureWorks',
    @filename1 = 'E:\Fixed_FG\AdventureWorks.mdf',
    @filename2 = 'E:\Fixed_FG\AdventureWorks_2.ndf',
    @filename3 = 'F:\New_Log\AdventureWorks_log.ldf';
GO
At this point, the database comes online with the log in the new location.
Next steps
Review the performance best practices for additional settings to improve performance.
For an overview of SQL Server on Azure Virtual Machines, see the following articles:
Applies to:
SQL Server on Azure VM
By default, Azure VMs with SQL Server 2016 or later are automatically registered with
the SQL IaaS Agent extension when detected by the CEIP service. You can enable the
automatic registration feature for your subscription to easily and automatically register
any SQL Server VMs not picked up by the CEIP service, such as older versions of SQL
Server.
This article teaches you to enable the automatic registration feature. Alternatively, you
can register a single VM, or register your VMs in bulk with the SQL IaaS Agent extension.
Note
SQL Server VMs deployed via the Azure marketplace after October 2022 have the
least privileged model enabled by default.
Management modes for the SQL IaaS
Agent extension were removed in March 2023.
Overview
Register your SQL Server VM with the SQL IaaS Agent extension to unlock a full feature
set of benefits.
By default, Azure VMs with SQL Server 2016 or later are automatically registered with
the SQL IaaS Agent extension when detected by the CEIP service with limited
functionality. You can use the automatic registration feature to automatically register
any SQL Server VMs not identified by the CEIP service. The license type automatically
defaults to that of the VM image. If you use a pay-as-you-go image for your VM, then
your license type will be PAYG, otherwise your license type will be AHUB by default. For
information about privacy, see the SQL IaaS Agent extension privacy statements.
Once automatic registration is enabled for a subscription, all current and future VMs that
have SQL Server installed are registered with the SQL IaaS Agent extension. This is done
by running a monthly job that detects whether or not SQL Server is installed on all the
unregistered VMs in the subscription. For unregistered VMs, the job installs the SQL IaaS
Agent extension binaries to the VM, then runs a one-time utility to check for the SQL
Server registry hive. If the SQL Server hive is detected, the virtual machine is registered
with the extension. If no SQL Server hive exists in the registry, the binaries are removed.
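The monthly detection flow described above can be sketched as pseudologic in Python; the callback names here are illustrative, not a real Azure API:

```python
def process_unregistered_vm(vm, install_binaries, hive_detected, register, remove_binaries):
    """Model the monthly auto-registration job for one unregistered VM:
    install the extension binaries, probe for the SQL Server registry hive,
    then either register the VM or clean the binaries back up."""
    install_binaries(vm)
    if hive_detected(vm):
        register(vm)
        return "registered"
    remove_binaries(vm)
    return "binaries removed"

# Simulate one VM with SQL Server installed and one without.
noop = lambda vm: None
print(process_unregistered_vm("vm1", noop, lambda vm: True, noop, noop))
print(process_unregistered_vm("vm2", noop, lambda vm: False, noop, noop))
```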
Caution
If the SQL Server hive is not present in the registry, removing the binaries
might not succeed if there are resource locks in place.
If you deployed a SQL Server VM with a marketplace image which has the SQL
IaaS Agent extension preinstalled, and the extension is in a failed state or it
was removed, automatic registration checks the registry to see if SQL Server is
installed on the VM and then registers it with the extension.
Move all pay-as-you-go (full price) SQL PaaS/IaaS workloads to take advantage of
your Azure Hybrid Benefit without having to configure them individually to enable
the benefit.
Ensure that all your SQL workloads are licensed in compliance with the existing
license agreements.
Separate the license compliance management roles from DevOps roles by using RBAC.
Take advantage of free business continuity by ensuring that your passive & disaster
recovery (DR) environments are properly identified.
Use MSDN licenses in Azure for non-production environments.
Centrally managed Azure Hybrid Benefit (CM-AHB) uses data provided by the SQL IaaS
Agent extension to account for the number of SQL Server licenses used by individual
Azure VMs and provides recommendations to the billing admin during the license
assignment process. Using the
recommendations ensures that you get the maximum discount by using Azure Hybrid
Benefit. If your VMs aren't registered with the SQL IaaS Agent extension when CM-AHB
is enabled by your billing admin, the service won't receive the full usage data from your
Azure subscriptions and therefore the CM-AHB recommendations will be inaccurate.
Important
If automatic registration is activated after CM-AHB is enabled, you run the risk of
unnecessary pay-as-you-go charges for your SQL Server on Azure VM workloads.
To mitigate this risk, adjust your license assignments in CM-AHB to account for the
additional usage that will be reported by the SQL IaaS Agent extension after auto-
registration. We published an open source tool that provides insights into the
utilization of SQL Server licenses, including the utilization by the SQL Servers on
Azure Virtual Machines that are not yet registered with the SQL IaaS Agent
extension.
Prerequisites
To enable automatic registration of your SQL Server VM with the extension, you'll need:
An Azure subscription .
The client credentials used to register the virtual machines exist in any of the
following Azure roles: Virtual Machine contributor, Contributor, or Owner.
Once automatic registration is enabled, SQL Server VMs are registered if they:
Are deployed using the Azure Resource Manager deployment model to a Windows Server
2008 R2 (or later) virtual machine. Windows Server 2008 isn't supported.
Have SQL Server installed.
Are deployed to the public or Azure Government cloud. Other clouds aren't
currently supported.
Note
6. Select Register to enable the feature and automatically register all current and
future SQL Server VMs with the SQL IaaS Agent extension. This won't restart the
SQL Server service on any of the VMs.
Azure CLI
To disable automatic registration using Azure CLI, run the following command:
Azure CLI
Console
.\EnableBySubscription.ps1
Next steps
Review the benefits provided by the SQL IaaS Agent extension.
Manually register a single VM
Troubleshoot known issues with the extension.
Review the SQL IaaS Agent extension privacy statements.
Review the best practices checklist to optimize for performance and security.
Applies to:
SQL Server on Azure VM
Register your SQL Server VM with the SQL IaaS Agent extension to unlock a wealth of
feature benefits for your SQL Server on Azure Windows VM.
This article teaches you to register a single SQL Server VM with the SQL IaaS Agent
extension. Alternatively, you can register all SQL Server VMs in a subscription
automatically or multiple VMs in bulk using a script.
Note
SQL Server VMs deployed via the Azure marketplace after October 2022 have the
least privileged model enabled by default.
Management modes for the SQL IaaS
Agent extension were removed in March 2023.
Overview
Registering with the SQL Server IaaS Agent extension creates the SQL virtual machine
resource within your subscription, which is a separate resource from the virtual machine
resource. Unregistering your SQL Server VM from the extension removes the SQL virtual
machine resource but won't drop the actual virtual machine.
Deploying an Azure Marketplace SQL Server VM image through the Azure portal
automatically registers the SQL Server VM with the extension. However, if you choose to
self-install SQL Server on an Azure virtual machine, or provision an Azure virtual
machine from a custom VHD, then you must register your SQL Server VM with the SQL
IaaS Agent extension to unlock full feature benefits and manageability. By default, Azure
VMs that have SQL Server 2016 or later installed will be automatically registered with
the SQL IaaS Agent extension when detected by the CEIP service. See the SQL Server
privacy supplement for more information. For information about privacy, see the SQL
IaaS Agent extension privacy statements.
To utilize the SQL IaaS Agent extension, you must first register your subscription with
the Microsoft.SqlVirtualMachine provider, which gives the SQL IaaS Agent extension
the ability to create resources within that specific subscription. Then you can register
your SQL Server VM with the extension.
Prerequisites
To register your SQL Server VM with the extension, you'll need:
An Azure subscription .
An Azure Resource Manager model Windows Server 2008 (or greater) virtual machine with
SQL Server 2008 (or greater) deployed to the public or Azure Government cloud.
The client credentials used to register the virtual machine exist in any of the
following Azure roles: Virtual Machine contributor, Contributor, or Owner.
The latest version of Azure CLI or Azure PowerShell (5.0 minimum).
A minimum of .NET Framework 4.5.1 or later.
To verify that none of the limitations apply to you.
Azure portal
Register your subscription with the resource provider by using the Azure portal:
Provide the SQL Server license type as either pay-as-you-go ( PAYG ) to pay per usage,
Azure Hybrid Benefit ( AHUB ) to use your own license, or disaster recovery ( DR ) to
activate the free DR replica license.
Azure portal
It's not currently possible to register your SQL Server VM with the SQL IaaS Agent
extension by using the Azure portal.
Azure portal
3. Select your SQL Server VM from the list. If your SQL Server VM isn't listed
here, it likely hasn't been registered with the SQL IaaS Agent extension.
4. View the value under Status. If Status is Succeeded, then the SQL Server VM
has been registered with the SQL IaaS Agent extension successfully.
Alternatively, you can check the status by choosing Repair under the Support +
troubleshooting pane in the SQL virtual machine resource. The provisioning state
for the SQL IaaS Agent extension can be Succeeded or Failed.
An error indicates that the SQL Server VM hasn't been registered with the extension.
Caution
Use extreme caution when unregistering your SQL Server VM from the extension.
Follow the steps carefully because it is possible to inadvertently delete the virtual
machine when attempting to remove the resource.
Azure portal
Unregister your SQL Server VM from the extension using the Azure portal:
4. Type the name of the SQL virtual machine and clear the check box next to the
virtual machine.
Warning
Failure to clear the checkbox next to the virtual machine name will delete
the virtual machine entirely. Clear the checkbox to unregister the SQL
Server VM from the extension but not delete the actual virtual machine.
5. Select Delete to confirm the deletion of the SQL virtual machine resource, and
not the SQL Server VM.
Next steps
Review the benefits provided by the SQL IaaS Agent extension.
Automatically register all VMs in a subscription.
Troubleshoot known issues with the extension.
Review the SQL IaaS Agent extension privacy statements.
Review the best practices checklist to optimize for performance and security.
Applies to:
SQL Server on Azure VM
This article describes how to register your SQL Server virtual machines (VMs) in bulk in
Azure with the SQL IaaS Agent extension by using the Register-SqlVMs Azure PowerShell
cmdlet.
Alternatively, you can register all SQL Server VMs automatically or individual SQL Server
VMs manually.
Note
SQL Server VMs deployed via the Azure marketplace after October 2022 have the
least privileged model enabled by default.
Management modes for the SQL IaaS
Agent extension were removed in March 2023.
Overview
The Register-SqlVMs cmdlet can be used to register all virtual machines in a given list of
subscriptions, resource groups, or a list of specific virtual machines. The cmdlet will
register the virtual machines and then generate both a report and a log file.
The registration process carries no risk, has no downtime, and will not restart the SQL
Server service or the virtual machine.
By default, Azure VMs with SQL Server 2016 or later are automatically registered with
the SQL IaaS Agent extension when detected by the CEIP service. You can use bulk
registration to register any SQL Server VMs that are not detected by the CEIP service.
For information about privacy, see the SQL IaaS Agent extension privacy statements.
Prerequisites
To register your SQL Server VM with the extension, you'll need the following:
Get started
Before proceeding, you must first create a local copy of the script, import it as a
PowerShell module, and connect to Azure.
Open an administrative PowerShell terminal and navigate to where you saved the
RegisterSqlVMs.psm1 file. Then, run the following PowerShell cmdlet to import the script
as a module:
PowerShell
Import-Module .\RegisterSqlVMs.psm1
Connect to Azure
Use the following PowerShell cmdlet to connect to Azure:
PowerShell
Connect-AzAccount
Example output:
Number of VMs skipped as they are not running SQL Server On Windows: 1
A specific VM
Use the following cmdlet to register a specific SQL Server virtual machine:
PowerShell
Example output:
Output description
Both a report and a log file are generated every time the Register-SqlVMs cmdlet is
used.
Report
The report is generated as a .txt file named
RegisterSqlVMScriptReport<Timestamp>.txt, where the timestamp is the time when the
script was started. The report contains the following output values:

Number of subscriptions registration failed for because you do not have access or
credentials are incorrect: Provides the number and list of subscriptions that had issues
with the provided authentication. The detailed error can be found in the log by
searching for the subscription ID.

Number of subscriptions that could not be tried because they are not registered to the
resource provider: Contains the count and list of subscriptions that have not been
registered to the resource provider.

Total VMs found: The count of virtual machines that were found in the scope of the
parameters passed to the cmdlet.

VMs already registered: The count of virtual machines that were skipped because they
were already registered with the extension.

Number of VMs registered successfully: The count of virtual machines that were
successfully registered after running the cmdlet. The registered virtual machines are
listed in the format SubscriptionID, Resource Group, Virtual Machine.

Number of VMs failed to register due to error: The count of virtual machines that failed
to register due to some error. The details of the error can be found in the log file.

Number of VMs skipped as the VM or the guest agent on the VM is not running: The count
and list of virtual machines that could not be registered because either the virtual
machine or the guest agent on the virtual machine was not running. These can be retried
once the virtual machine or guest agent has been started. Details can be found in the
log file.

Number of VMs skipped as they are not running SQL Server on Windows: The count of
virtual machines that were skipped because they are not running SQL Server or are not a
Windows virtual machine. The virtual machines are listed in the format SubscriptionID,
Resource Group, Virtual Machine.
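Report lines for registered or skipped VMs use the SubscriptionID, Resource Group, Virtual Machine format; a small hypothetical parser for such a line might look like:

```python
def parse_report_vm_line(line: str) -> dict:
    """Split a 'SubscriptionID, Resource Group, Virtual Machine' report line
    into its three fields, stripping surrounding whitespace."""
    subscription_id, resource_group, vm_name = [p.strip() for p in line.split(",")]
    return {"subscription": subscription_id, "resource_group": resource_group, "vm": vm_name}

print(parse_report_vm_line("0000-1111, myResourceGroup, mySqlVm"))
```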
Log
Errors are logged in the log file named VMsNotRegisteredDueToError<Timestamp>.log ,
where timestamp is the time when the script started. If the error is at the subscription
level, the log contains the comma-separated Subscription ID and the error message. If
the error is with the virtual machine registration, the log contains the Subscription ID,
Resource group name, virtual machine name, error code, and message separated by
commas.
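Based on the two line formats described above, here is a sketch of a log-line classifier (illustrative only; it assumes the error message itself contains no commas):

```python
def parse_error_log_line(line: str) -> dict:
    """Classify a VMsNotRegisteredDueToError log line by its field count:
    two fields for a subscription-level error, five for a VM registration error."""
    parts = [p.strip() for p in line.split(",")]
    if len(parts) == 2:
        return {"scope": "subscription", "subscription": parts[0], "message": parts[1]}
    if len(parts) == 5:
        return {"scope": "vm", "subscription": parts[0], "resource_group": parts[1],
                "vm": parts[2], "error_code": parts[3], "message": parts[4]}
    raise ValueError(f"unexpected field count: {len(parts)}")

print(parse_error_log_line("0000-1111, access denied"))
print(parse_error_log_line("0000-1111, myRg, myVm, 409, guest agent not running"))
```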
Remarks
When you register SQL Server VMs with the extension by using the provided script,
consider the following:
Registration with the extension requires a guest agent running on the SQL Server
VM. Windows Server 2008 images do not have a guest agent, so these virtual
machines will fail and must be registered manually with limited functionality.
There is retry logic built in to overcome transient errors. If the virtual machine is
successfully registered, then it is a rapid operation. However, if the registration
fails, then each virtual machine will be retried. As such, you should allow significant
time to complete the registration process - though actual time requirement is
dependent on the type and number of errors.
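The exact retry behavior isn't specified here; as a generic illustration of per-VM retry for transient errors, under assumed attempt counts and delays:

```python
import time

def register_with_retry(register_fn, vm, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry a registration call a few times, backing off between attempts.
    register_fn, attempts, and base_delay are illustrative parameters, not
    the script's actual internals."""
    for attempt in range(attempts):
        try:
            return register_fn(vm)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error to the log
            sleep(base_delay * (2 ** attempt))  # exponential backoff

# Simulate a call that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky(vm):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return f"{vm} registered"

print(register_with_retry(flaky, "myVm", sleep=lambda s: None))
```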
Full script
For the full script on GitHub, see Bulk register SQL Server VMs with Az PowerShell .
Next steps
Review the benefits provided by the SQL IaaS Agent extension.
Manually register a single VM
Automatically register all VMs in a subscription.
Troubleshoot known issues with the extension.
Review the SQL IaaS Agent extension privacy statements.
Review the best practices checklist to optimize for performance and security.
Applies to:
SQL Server on Azure VM
This article helps you resolve known issues and troubleshoot errors when using the SQL
Server IaaS agent extension.
For answers to frequently asked questions about the extension, check out the FAQ.
Check prerequisites
To avoid errors due to unsupported options or limitations, verify the prerequisites for
the extension.
If you repair or reinstall the SQL IaaS Agent extension, your settings won't be preserved,
other than licensing changes. If you've repaired or reinstalled the extension, you'll have
to reconfigure automated backup, automated patching, and any other services you had
configured prior to the repair or reinstall.
Repair extension
It's possible for your SQL IaaS Agent extension to be in a failed state. Use the Azure
portal to repair the SQL IaaS Agent extension.
3. Select your SQL Server VM from the list. If your SQL Server VM isn't listed here, it
likely hasn't been registered with the SQL IaaS Agent extension.
5. If your provisioning state shows as Failed, choose Repair to repair the extension. If
your state is Succeeded you can check the box next to Force repair to repair the
extension regardless of state.
SQL IaaS Agent extension registration fails with
error "Creating SQL Virtual Machine resource
for PowerBI VM images is not supported"
SQL IaaS Agent extension registration is blocked and not supported on
Power BI VM, SQL Server Reporting Services, and SQL Server Analysis Services images
deployed from Azure Marketplace.
The SQL virtual machines resource is not in a valid state for management
SQL management operations are disabled because the state of the underlying virtual
machine is invalid.
The SQL VM may be stopped, deallocated, in a failed state, or not found. Verify that
the underlying virtual machine is running.
Your SQL IaaS Agent extension may be in a failed state. Repair the extension.
Unregister your SQL VM from the extension and then register the SQL VM with the
extension again if you did any of the following:
Migrated your VM from one subscription to another.
Changed the locale or collation of SQL Server.
Changed the version of your SQL Server instance.
Changed the edition of your SQL Server instance.
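The unregister-then-register cycle described above can be sketched with PowerShell. This is a minimal example assuming the Az.SqlVirtualMachine module; the resource group, VM name, location, and license type are placeholders you'd replace with your own values:

```powershell
# Unregister the SQL Server VM from the SQL IaaS Agent extension
# (this removes the SQL virtual machines resource, not the VM itself).
Remove-AzSqlVM -ResourceGroupName "myResourceGroup" -Name "mySqlVm"

# Register the SQL Server VM with the extension again.
New-AzSqlVM -ResourceGroupName "myResourceGroup" -Name "mySqlVm" `
    -Location "EastUS2" -LicenseType "PAYG"
```

Run these from a session already signed in with Connect-AzAccount and scoped to the correct subscription.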
Provisioning failed
Repair the extension if the SQL IaaS Agent extension status shows as Provisioning failed
in the Azure portal.
Microsoft SQL Server IaaS agent is the main service for the SQL IaaS Agent
extension and should run under the Local System account.
Microsoft SQL Server IaaS Query Service is a helper service that helps the
extension run queries within SQL Server and should run under the NT Service
account NT Service\SqlIaaSExtensionQuery .
Next steps
Review the benefits provided by the SQL IaaS Agent extension.
Manually register a single VM
Automatically register all VMs in a subscription.
Review the SQL IaaS Agent extension privacy statements.
Review the best practices checklist to optimize for performance and security.
Applies to:
SQL Server on Azure VM
This article teaches you how to use Azure Site Recovery to migrate your SQL Server
virtual machine (VM) from one region to another within Azure.
1. Preparing: Confirm that both your source SQL Server VM and target region are
adequately prepared for the move.
2. Configuring: Moving your SQL Server VM requires that it is a replicated object
within the Azure Site Recovery vault. You need to add your SQL Server VM to the
Azure Site Recovery vault.
3. Testing: Migrating the SQL Server VM requires failing it over from the source
region to the replicated target region. To ensure that the move process will
succeed, you need to first test that your SQL Server VM can successfully fail over to
the target region. This will help expose any issues and avoid them when
performing the actual move.
4. Moving: Once your test failover has passed and you know that it's safe to migrate
your SQL Server VM, you can perform the move of the VM to the target region.
5. Cleaning up: To avoid billing charges, remove the SQL Server VM from the vault,
and any unnecessary resources that are left over in the resource group.
Verify prerequisites
Confirm that moving from your source region to your target region is supported.
Review the scenario architecture and components as well as the support limitations
and requirements.
Verify account permissions. If you created your free Azure account, you're the
administrator of your subscription. If you're not the subscription administrator,
work with the administrator to assign the permissions that you need. To enable
replication for a VM and copy data using Azure Site Recovery, you must have:
Permissions to create a VM. The Virtual Machine Contributor built-in role has
these permissions, which include:
Permissions to create a VM in the selected resource group.
Permissions to create a VM in the selected virtual network.
Permissions to write to the selected storage account.
Permissions to manage Azure Site Recovery operations. The Site Recovery
Contributor role has all the permissions that are required to manage Site
Recovery operations in a Recovery Services vault.
Moving the SQL virtual machines resource is not supported. You need to reinstall
the SQL IaaS Agent extension in the target region where you have planned your
move. If you are moving your resources between subscriptions or tenants, make
sure you've registered your subscription with the resource provider before
attempting to register your migrated SQL Server VM with the SQL IaaS Agent
extension.
Prepare to move
Prepare both the source SQL Server VM and the target region for the move.
2. Choose to Create a resource from the upper-left hand corner of the navigation
pane.
3. Select IT & Management tools and then select Backup and Site Recovery.
4. On the Basics tab, under Project details, either create a new resource group in the
target region, or select an existing resource group in the target region.
5. Under Instance Details, specify a name for your vault, and then select your target
Region from the drop-down.
8. (Optional) Select the star next to Recovery Services vaults to add it to your quick
navigation bar.
9. Select Recovery services vaults and then select the Recovery Services vault you
created.
11. Select Source and then select Azure as the source. Select the appropriate values
for the other drop-down fields, such as the location for your source VMs. Only
resource groups located in the Source location region will be visible in the Source
resource group field.
12. Select Virtual machines and then choose the virtual machines you want to
migrate. Select OK to save your VM selection.
13. Select Settings, and then choose your Target location from the drop-down. This
should be the resource group you prepared earlier.
14. Once you have customized replication, select Create target resources to create the
resources in the new location.
15. Once resource creation is complete, select Enable replication to start replication of
your SQL Server VM from the source to the target region.
16. You can check the status of replication by navigating to your recovery vault,
selecting Replicated items and viewing the Status of your SQL Server VM. A status
of Protected indicates that replication has completed.
Test move process
The following steps show you how to use Azure Site Recovery to test the move process.
1. Navigate to your Recovery Services vault in the Azure portal and select
Replicated items.
2. Select the SQL Server VM you would like to move, verify that the Replication
Health shows as Healthy and then select Test Failover.
3. On the Test Failover page, select the Latest app-consistent recovery point to use
for the failover, as that is the only type of snapshot that can guarantee SQL Server
data consistency.
4. Select the virtual network under Azure virtual network and then select OK to test
failover.
Important
We recommend that you use a separate Azure VM network for the failover
test. Don't use the production network that was set up when you enabled
replication and that you want to move your VMs into eventually.
5. To monitor progress, navigate to your vault, select Site Recovery jobs under
Monitoring, and then select the Test failover job that's in progress.
6. Once the test completes, navigate to Virtual machines in the portal and review the
newly created virtual machine. Make sure the SQL Server VM is running, is sized
appropriately, and is connected to the appropriate network.
7. Delete the VM that was created as part of the test, as the Failover option will be
grayed out until the failover test resources are cleaned up. Navigate back to the
vault, select Replicated items, select the SQL Server VM, and then select Cleanup
test failover. Record and save any observations associated with the test in the
Notes section and select the checkbox next to Testing is complete. Delete test
failover virtual machines. Select OK to clean up resources after the test.
Move the SQL Server VM
The following steps show you how to move the SQL Server VM from your source region
to your target region.
1. Navigate to the Recovery Services vault, select Replicated items, select the VM,
and then select Failover.
3. Select the check box next to Shut down the machine before beginning failover.
Site Recovery will attempt to shut down the source VM before triggering the
failover, but failover will continue even if shutdown fails.
5. You can monitor the failover process from the same Site Recovery jobs page you
viewed when monitoring the failover test in the previous section.
6. After the job completes, check that the SQL Server VM appears in the target region
as expected.
7. Navigate back to the vault, select Replicated Items, select the SQL Server VM, and
select Commit to finish the move process to the target region. Wait until the
commit job finishes.
8. Register your SQL Server VM with the SQL IaaS Agent extension to enable SQL
virtual machine manageability in the Azure portal and features associated with the
extension. For more information, see Register SQL Server VM with the SQL IaaS
Agent extension.
Warning
SQL Server data consistency is only guaranteed with app-consistent snapshots. The
latest processed snapshot can't be used for SQL Server failover as a crash recovery
snapshot can't guarantee SQL Server data consistency.
1. Navigate back to the Site Recovery vault, select Replicated items, and select the
SQL Server VM.
2. Select Disable Replication. Select a reason for disabling protection, and then select
OK to disable replication.
Important
It is important to perform this step to avoid being charged for Azure Site
Recovery replication.
3. If you have no plans to reuse any of the resources in the source region, delete all
relevant network resources, and corresponding storage accounts.
Next steps
For more information, see the following articles:
Overview of SQL Server on a Windows VM
SQL Server on a Windows VM FAQ
SQL Server on a Windows VM pricing guidance
What's new for SQL Server on Azure VMs
Configure cluster quorum for SQL
Server on Azure VMs
Article • 11/09/2022
Applies to:
SQL Server on Azure VM
This article teaches you how to configure one of the three quorum options for a
Windows Server Failover Cluster for SQL Server on Azure Virtual Machines (VMs): a disk
witness, a cloud witness, or a file share witness.
Overview
The quorum for a cluster is determined by the number of voting elements that must be
part of active cluster membership for the cluster to start properly or continue running.
Configuring a quorum resource allows a two-node cluster to continue with only one
node online. The Windows Server Failover Cluster is the underlying technology for the
SQL Server on Azure VMs high availability options: failover cluster instances (FCIs) and
availability groups (AGs).
The disk witness is the most resilient quorum option, but to use a disk witness on SQL
Server on Azure VMs, you must use an Azure shared disk, which imposes some limitations
on the high availability solution. As such, use a disk witness when you're configuring
your failover cluster instance with Azure shared disks; otherwise, use a cloud witness
whenever possible. If you are using Windows Server 2012 R2 or older, which does not
support cloud witness, you can use a file share witness.
The following quorum options are available to use for SQL Server on Azure VMs:
To learn more about quorum, see the Windows Server Failover Cluster overview.
Cloud witness
A cloud witness is a type of failover cluster quorum witness that uses Microsoft Azure
storage to provide a vote on cluster quorum.
The following table provides additional information and considerations about the cloud
witness:
When configuring a Cloud Witness quorum resource for your failover cluster, consider:
Instead of storing the access key, your failover cluster generates and securely
stores a shared access signature (SAS) token.
The generated SAS token is valid as long as the access key remains valid. When
rotating the primary access key, it is important to first update the cloud witness
(on all your clusters that are using that storage account) with the secondary
access key before regenerating the primary access key.
Cloud Witness uses the HTTPS REST interface of the Azure Storage service, so it
requires the HTTPS port (443) to be open outbound on all cluster nodes.
Once your storage account is created, follow these steps to configure your cloud witness
quorum resource for your failover cluster:
PowerShell
You can configure cloud witness with the cmdlet Set-ClusterQuorum using the
PowerShell command:
PowerShell
In the rare instance you need to use a different endpoint, use this PowerShell
command:
PowerShell
See the cloud witness documentation for help for finding the Storage Account
AccessKey.
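The cloud witness commands described above can be sketched as follows; this assumes the FailoverClusters PowerShell module on a cluster node, and the storage account name, key, and endpoint are placeholders:

```powershell
# Configure a cloud witness using the default Azure Storage endpoint.
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-access-key>"

# In the rare instance you need a different endpoint (for example, a
# sovereign cloud), specify it explicitly.
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-access-key>" -Endpoint "core.chinacloudapi.cn"
```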
Disk witness
A disk witness is a small clustered disk in the Cluster Available Storage group. This disk is
highly available and can fail over between nodes.
The disk witness is the recommended quorum option when used with a shared storage
high availability solution, such as the failover cluster instance with Azure shared disks.
The following table provides additional information and considerations about the
quorum disk witness:
Witness type: Disk witness
Description:
Dedicated LUN that stores a copy of the cluster database.
Most useful for clusters with shared (not replicated) storage.
Requirements and recommendations:
Size of LUN must be at least 512 MB.
Must be dedicated to cluster use and not assigned to a clustered role.
Must be included in clustered storage and pass storage validation tests.
Can't be a disk that is a Cluster Shared Volume (CSV).
Basic disk with a single volume.
Doesn't need to have a drive letter.
Can be formatted with NTFS or ReFS.
Can be optionally configured with hardware RAID for fault tolerance.
Should be excluded from backups and antivirus scanning.
A disk witness isn't supported with Storage Spaces Direct.
To use an Azure shared disk for the disk witness, you must first create the disk and
mount it. To do so, follow the steps in the Mount disk section of the Azure shared disk
failover cluster instance guide. The disk does not need to be premium.
After your disk has been mounted, add it to the cluster storage with the following steps:
After your disk has been added as clustered storage, configure it as the disk witness
using PowerShell:
Use the name of the clustered disk as the parameter for the disk witness when using the
PowerShell cmdlet Set-ClusterQuorum:
PowerShell
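A minimal sketch of the disk witness command, assuming the FailoverClusters module; the clustered disk name is a placeholder for the disk you added to cluster storage:

```powershell
# Configure the clustered disk as the quorum disk witness.
Set-ClusterQuorum -DiskWitness "Cluster Disk 2"
```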
You can also use the Failover Cluster manager; follow the same steps as for the cloud
witness, but choose the disk witness as the quorum option instead.
Configure a file share witness if a disk witness or a cloud witness are unavailable or
unsupported in your environment.
The following table provides additional information and considerations about the
quorum file share witness:
After your file share has been properly configured and mounted, use PowerShell to add
the file share as the quorum witness resource:
PowerShell
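A minimal sketch of the file share witness command, assuming the FailoverClusters module; the UNC path is a placeholder for your properly configured file share:

```powershell
# Configure the file share as the quorum witness.
Set-ClusterQuorum -FileShareWitness "\\fileserver\witness"
```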
You will be prompted for an account and password for a local (to the file share) non-
admin account that has full admin rights to the share. The cluster will keep the name
and password encrypted and not accessible by anyone.
You can also use the Failover Cluster manager; follow the same steps as for the cloud
witness, but choose the file share witness as the quorum option instead.
Quorum voting guidelines
Start with each node having no vote by default. Each node should only have a vote with explicit
justification.
Enable votes for cluster nodes that host the primary replica of an availability group, or the
preferred owners of a failover cluster instance.
Enable votes for automatic failover owners. Each node that may host a primary replica or FCI as a
result of an automatic failover should have a vote.
If an availability group has more than one secondary replica, only enable votes for the replicas
that have automatic failover.
Disable votes for nodes that are in secondary disaster recovery sites. Nodes in secondary sites
should not contribute to the decision of taking a cluster offline if there's nothing wrong with the
primary site.
Have an odd number of votes, with three quorum votes minimum. Add a quorum witness for an
additional vote if necessary in a two-node cluster.
Reassess vote assignments post-failover. You don't want to fail over into a cluster configuration
that doesn't support a healthy quorum.
Next steps
To learn more, see:
Applies to:
SQL Server on Azure VM
Automated Backup automatically configures Managed Backup to Microsoft Azure for all
existing and new databases on an Azure VM running SQL Server 2016 or later Standard,
Enterprise, or Developer editions. This enables you to configure regular database
backups that utilize durable Azure Blob Storage. Automated Backup depends on the
SQL Server infrastructure as a service (IaaS) Agent Extension.
Prerequisites
To use Automated Backup, review the following prerequisites:
Operating system:
Note
For SQL Server 2014, see Automated Backup for SQL Server 2014.
Database configuration:
Target user databases must use the full recovery model. System databases don't
have to use the full recovery model. However, if you require log backups to be
taken for model or msdb , you must use the full recovery model. For more
information about the impact of the full recovery model on backups, see Backup
under the full recovery model.
The SQL Server VM has been registered with the SQL IaaS Agent extension and the
Automated Backup feature is enabled. Since Automated Backup relies on the
extension, Automated Backup is only supported on target databases from the
default instance, or a single named instance. If there's no default instance, and
multiple named instances, the SQL IaaS Agent extension fails and Automated
Backup won't work.
Settings
The following table describes the options that can be configured for Automated Backup.
The actual configuration steps vary depending on whether you use the Azure portal or
Azure Windows PowerShell commands. Automated Backup uses backup compression by
default and it can't be disabled.
Basic Settings
Storage Account (an Azure storage account): An Azure storage account to use for storing
Automated Backup files in blob storage. A container is created at this location to store
all backup files. The backup file naming convention includes the date, time, and
database GUID.
Password (password text): A password for encryption keys. This password is only required
if encryption is enabled. In order to restore an encrypted backup, you must have the
correct password and related certificate that was used at the time the backup was taken.
Advanced Settings
System Database Backups (Enable/Disable; default Disabled): When enabled, this feature
also backs up the system databases: master , msdb , and model . For the msdb and model
databases, verify that they are in full recovery mode if you want log backups to be
taken. Log backups are never taken for master , and no backups are taken for tempdb .
Full backup frequency (Daily/Weekly): Frequency of full backups. In both cases, full
backups begin during the next scheduled time window. When weekly is selected, backups
could span multiple days until all databases have successfully backed up.
Full backup start time (00:00 – 23:00; default 01:00): Start time of a given day during
which full backups can take place.
Full backup time window (1 – 23 hours; default 1 hour): Duration of the time window of a
given day during which full backups can take place.
When this happens, Automated Backup begins backing up the remaining databases the
next day, Wednesday at 1 AM for one hour. If not all databases have been backed up in
that time, it tries again the next day at the same time. This continues until all databases
have been successfully backed up.
After it reaches Tuesday again, Automated Backup begins backing up all databases
again.
This scenario shows that Automated Backup only operates within the specified time
window, and each database is backed up once per week. This also shows that it's
possible for backups to span multiple days in the case where it isn't possible to
complete all backups in a single day.
This means that the next available backup window is Monday at 10 PM for 6 hours. At
that time, Automated Backup begins backing up your databases one at a time.
Then, on Tuesday at 10 PM for 6 hours, full backups of all databases start again.
Important
Backups happen sequentially during each interval. For instances with a large
number of databases, schedule your backup interval with enough time to
accommodate all backups. If backups cannot complete within the given interval,
some backups may be skipped, and your time between backups for a single
database may be higher than the configured backup interval time, which could
negatively impact your restore point objective (RPO).
Configure new VMs
Use the Azure portal to configure Automated Backup when you create a new SQL Server
2016 or later machine in the Resource Manager deployment model.
In the SQL Server settings tab, select Enable under Automated Backup.
When you
enable Automated Backup, you can configure the following settings:
To encrypt the backup, select Enable. Then specify the Password. Azure creates a
certificate to encrypt the backups and uses the specified password to protect that
certificate.
Choose Select Storage Container to specify the container where you want to store your
backups.
By default the schedule is set automatically, but you can create your own schedule by
selecting Manual, which allows you to configure the backup frequency, backup time
window, and the log backup frequency in minutes.
The following Azure portal screenshot shows the Automated Backup settings when you
create a new SQL Server VM:
Configure existing VMs
For existing SQL Server virtual machines, go to the SQL virtual machines resource and
then select Backups to configure your Automated Backups.
You can configure the retention period (up to 90 days), the container for the storage
account where you want to store your backups, as well as the encryption, and the
backup schedule. By default, the schedule is automated.
If you want to set your own backup schedule, choose Manual and configure the backup
frequency, whether or not you want system databases backed up, and the transaction
log backup interval in minutes.
When finished, select the Apply button on the bottom of the Backups settings page to
save your changes.
If you're enabling Automated Backup for the first time, Azure configures the SQL Server
IaaS Agent in the background. During this time, the Azure portal might not show that
Automated Backup is configured. Wait several minutes for the agent to be installed and
configured. After that, the Azure portal will reflect the new settings.
Note
This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.
PowerShell
$vmname = "vmname"
$resourcegroupname = "resourcegroupname"
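The extension check that these variables feed into can be sketched as follows; this is a minimal example assuming the Az.Compute module, and it lists the extensions installed on the VM so you can look for the SQL Server IaaS Agent entry:

```powershell
# List the extensions installed on the VM; look for "SqlIaaSAgent" or
# "SQLIaaSExtension" with a ProvisioningState of "Succeeded".
Get-AzVM -ResourceGroupName $resourcegroupname -Name $vmname |
    Select-Object -ExpandProperty Extensions
```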
If the SQL Server IaaS Agent extension is installed, you should see it listed as
"SqlIaaSAgent" or "SQLIaaSExtension." ProvisioningState for the extension should also
show "Succeeded."
If it isn't installed or it has failed to be provisioned, you can install it with the following
command. In addition to the VM name and resource group, you must also specify the
region ($region) that your VM is located in.
PowerShell
$region = "EASTUS2"
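The install command that uses this region variable can be sketched as follows; the extension name and version shown are illustrative values for the Az.Compute Set-AzVMSqlServerExtension cmdlet:

```powershell
# Install the SQL Server IaaS Agent extension on the VM.
Set-AzVMSqlServerExtension -ResourceGroupName $resourcegroupname `
    -VMName $vmname -Name "SqlIaasExtension" -Version "2.0" -Location $region
```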
PowerShell
Enable : True
EnableEncryption : False
RetentionPeriod : 30
StorageUrl : https://test.blob.core.windows.net/
StorageAccessKey :
Password :
BackupSystemDbs : False
BackupScheduleType : Manual
FullBackupFrequency : WEEKLY
FullBackupStartTime : 2
FullBackupWindowHours : 2
LogBackupFrequency : 60
If your output shows that Enable is set to False, then you have to enable Automated
Backup. The good news is that you enable and configure Automated Backup in the
same way. See the next section for this information.
Note
If you check the settings immediately after making a change, it is possible that you
will get back the old configuration values. Wait a few minutes and check the
settings again to make sure that your changes were applied.
First, select, or create a storage account for the backup files. The following script selects
a storage account or creates it if it doesn't exist.
PowerShell
$storage_accountname = "yourstorageaccount"
$storage_resourcegroupname = $resourcegroupname
If (-Not $storage)
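The truncated select-or-create pattern above might be completed as follows; this is a sketch assuming the Az.Storage module, and the SKU and location are placeholder choices:

```powershell
# Look up the storage account; returns $null if it doesn't exist.
$storage = Get-AzStorageAccount -ResourceGroupName $storage_resourcegroupname `
    -Name $storage_accountname -ErrorAction SilentlyContinue

# Create the storage account if it wasn't found.
If (-Not $storage) {
    $storage = New-AzStorageAccount -ResourceGroupName $storage_resourcegroupname `
        -Name $storage_accountname -SkuName "Standard_LRS" -Location "EastUS2"
}
```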
Note
Automated Backup does not support storing backups in premium storage, but it
can take backups from VM disks which use Premium Storage.
PowerShell
-FullBackupStartHour 20 -FullBackupWindowInHours 2 `
-LogBackupFrequencyInMinutes 30
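The truncated command above can be sketched in full; this is a minimal example assuming the Az.Compute cmdlets New-AzVMSqlServerAutoBackupConfig and Set-AzVMSqlServerExtension, with illustrative schedule values:

```powershell
# Build an Automated Backup configuration with a manual weekly schedule.
$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable `
    -RetentionPeriodInDays 10 -StorageContext $storage.Context `
    -ResourceGroupName $storage_resourcegroupname `
    -BackupScheduleType Manual -FullBackupFrequency Weekly `
    -FullBackupStartHour 20 -FullBackupWindowInHours 2 `
    -LogBackupFrequencyInMinutes 30

# Apply the configuration to the SQL Server VM through the extension.
Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig `
    -VMName $vmname -ResourceGroupName $resourcegroupname
```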
It could take several minutes to install and configure the SQL Server IaaS Agent.
PowerShell
$password = "P@ssw0rd"
-FullBackupStartHour 20 -FullBackupWindowInHours 2 `
-LogBackupFrequencyInMinutes 30
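The encryption variant that this password feeds into can be sketched as follows; this assumes the same Az.Compute cmdlet, and the password is a placeholder that should be replaced with a strong secret:

```powershell
# Convert the password to a SecureString and enable encrypted backups.
$encryptionpassword = ConvertTo-SecureString -String $password -AsPlainText -Force
$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable `
    -EnableEncryption -CertificatePassword $encryptionpassword `
    -RetentionPeriodInDays 10 -StorageContext $storage.Context `
    -ResourceGroupName $storage_resourcegroupname `
    -FullBackupStartHour 20 -FullBackupWindowInHours 2 `
    -LogBackupFrequencyInMinutes 30
```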
To confirm your settings are applied, verify the Automated Backup configuration.
PowerShell
Example script
The following script provides a set of variables that you can customize to enable and
configure Automated Backup for your VM. In your case, you might need to customize
the script based on your requirements. For example, you would have to make changes if
you wanted to disable the backup of system databases or enable encryption.
PowerShell
$vmname = "yourvmname"
$resourcegroupname = "vmresourcegroupname"
$storage_accountname = "storageaccountname"
$storage_resourcegroupname = $resourcegroupname
$retentionperiod = 10
$backupscheduletype = "Manual"
$fullbackupfrequency = "Weekly"
$fullbackupstarthour = "20"
$fullbackupwindow = "2"
$logbackupfrequency = "30"
If (-Not $storage)
-LogBackupFrequencyInMinutes $logbackupfrequency
Monitoring
To monitor Automated Backup on SQL Server 2016 and later, you have two main
options. Because Automated Backup uses the SQL Server Managed Backup feature, the
same monitoring techniques apply to both.
Another option is to take advantage of the built-in Database Mail feature for
notifications.
1. Call the msdb.managed_backup.sp_set_parameter stored procedure to assign an
email address to the SSMBackup2WANotificationEmailIds parameter.
2. Enable SendGrid to send the emails from the Azure VM.
3. Use the SMTP server and user name to configure Database Mail. You can configure
Database Mail in SQL Server Management Studio or with Transact-SQL commands.
For more information, see Database Mail.
4. Configure SQL Server Agent to use Database Mail.
5. Verify that the SMTP port is allowed both through the local VM firewall and the
network security group for the VM.
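Step 1 above can be sketched in T-SQL; the email address is a placeholder, and this assumes the documented msdb.managed_backup.sp_set_parameter procedure used by Managed Backup:

```sql
-- Assign the notification email address used by Managed Backup.
EXEC msdb.managed_backup.sp_set_parameter
    @parameter_name = N'SSMBackup2WANotificationEmailIds',
    @parameter_value = N'dba@contoso.com';
```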
Known issues
Consider these known issues when working with the Automated Backup feature.
Symptom: Enabling Automated Backup will fail if your IaaS extension is in a failed state.
Solution: Repair the SQL IaaS Agent extension if it's in a failed state.
Symptom: Enabling Automated Backup fails if you have hundreds of databases.
Solution: This is a known limitation with the SQL IaaS Agent extension. To work around
this issue, you can enable Managed Backup directly instead of using the SQL IaaS Agent
extension to configure Automated Backup.
Symptom: Enabling Automated Backup fails due to metadata issues.
Solution: Stop the SQL IaaS Agent service. Run the T-SQL command: use msdb exec
autoadmin_metadata_delete . Start the SQL IaaS Agent service and try to re-enable
Automated Backup from the Azure portal.
Symptom: Enabling Automated Backup for FCI.
Solution: Backups using private endpoints are unsupported. Use the full storage account
URI for your backup.
Symptom: Backing up multiple SQL Server instances using Automated Backup.
Solution: Automated Backup currently only supports one SQL Server instance. If you have
multiple named instances and the default instance, Automated Backup works with the
default instance. If you have multiple named instances and no default instance, turning
on Automated Backup will fail.
Symptom: Automated Backup fails for SQL Server 2016 and later.
Solution: Enable Allow Blob Public Access on the storage account. This provides a
temporary workaround to a known issue.
Symptom: Automated/Managed Backup fails due to connectivity to the storage account, or
timeout errors.
Solution: Check that the network security group (NSG) for the virtual network and the
Windows Firewall aren't blocking outbound connections from the virtual machine (VM) to
the storage account on port 443.
Symptom: Automated/Managed Backup fails due to memory/IO pressure.
Solution: See if you can increase the max server memory, and/or resize the disk/VM if
you're running out of IO/VM limits. If you're using an availability group, consider
offloading your backups to the secondary replica.
Symptom: Automated Backup fails after a server rename.
Solution: If you've renamed your machine hostname, you need to also rename the hostname
inside SQL Server.
Symptom: Error: The operation failed because of an internal error. The argument must not
be empty string.\r\nParameter name: sasToken. Please retry later.
Solution: This is likely caused by the SQL Server Agent service not having correct
impersonation permissions. Change the SQL Server Agent service to use a different
account to fix this issue.
Symptom: Error: SQL Server Managed Backup to Microsoft Azure cannot configure the
default backup settings for the SQL Server instance because the container URL was
invalid. It is also possible that your SAS credential is invalid.
Solution: You may see this error if you have a large number of databases. Use Managed
Backup instead of Automated Backup.
Symptom: The Automated Backup job failed after a VM restart.
Solution: Check that the SQL Server Agent service is up and running.
Symptom: Managed Backup fails intermittently, or Error: Execution Timeout Expired.
Solution: This is a known issue fixed in CU18 for SQL Server 2019 and in KB4040376 for
SQL Server 2014-2017.
Symptom: Error: The remote server returned an error: (403) Forbidden.
Solution: Repair the SQL IaaS Agent extension.
Symptom: Error 3202: Write on storage account failed 13 (The data is invalid).
Solution: Remove the immutable blob policy on the storage container and make sure the
storage account is using, at minimum, TLS 1.0.
Symptom: Disabling Automated Backup will fail if your IaaS extension is in a failed
state.
Solution: Repair the SQL IaaS Agent extension if it's in a failed state.
Symptom: Disabling Automated Backup fails due to metadata issues.
Solution: Stop the SQL IaaS Agent service. Run the T-SQL command: use msdb exec
autoadmin_metadata_delete . Start the SQL IaaS Agent service and try to disable
Automated Backup from the Azure portal.
Next steps
Automated Backup configures Managed Backup on Azure VMs, so it's important to
review the documentation for Managed Backup to understand the behavior and
implications.
You can find additional backup and restore guidance for SQL Server on Azure VMs in
the following article: Backup and restore for SQL Server on Azure virtual machines.
For information about other available automation tasks, see SQL Server IaaS Agent
Extension.
For more information about running SQL Server on Azure VMs, see SQL Server on Azure
virtual machines overview.
Automated Backup for SQL Server 2014
virtual machines (Resource Manager)
Article • 06/27/2023
Applies to:
SQL Server on Azure VM
Automated Backup automatically configures Managed Backup to Microsoft Azure for all
existing and new databases on an Azure VM running SQL Server 2014 Standard or
Enterprise. This enables you to configure regular database backups that utilize durable
Azure Blob storage. Automated Backup depends on the SQL Server infrastructure as a
service (IaaS) Agent Extension.
Note
Azure has two different deployment models you can use to create and work with
resources: Azure Resource Manager and classic. This article covers the use of the
Resource Manager deployment model. We recommend the Resource Manager
deployment model for new deployments instead of the classic deployment model.
Prerequisites
To use Automated Backup, consider the following prerequisites:
Operating system:
Note
For SQL 2016 and greater, see Automated Backup for SQL Server 2016.
Database configuration:
Target user databases must use the full recovery model. System databases do not
have to use the full recovery model. However, if you require log backups to be
taken for model or msdb , you must use the full recovery model. For more
information about the impact of the full recovery model on backups, see Backup
under the full recovery model.
The SQL Server VM has been registered with the SQL IaaS Agent extension, and the
Automated Backup feature is enabled. Because Automated Backup relies on the
extension, it's supported only for databases in the default instance or in a single
named instance. If the VM has no default instance and has multiple named instances,
the SQL IaaS Agent extension fails and Automated Backup won't work.
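As a quick check of the registration prerequisite, you can query the SQL virtual machine resource with Az PowerShell. This is a sketch: the resource names are placeholders, and it assumes the Az.SqlVirtualMachine module is installed and you're signed in with Connect-AzAccount.

```powershell
# Verify that the VM is registered with the SQL IaaS Agent extension (sketch;
# names are placeholders).
$sqlvm = Get-AzSqlVM -ResourceGroupName "vmresourcegroupname" -Name "yourvmname"
$sqlvm | Select-Object Name, SqlManagementType, ProvisioningState
```

If the VM isn't registered, Get-AzSqlVM returns an error rather than a resource.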
Settings
The following table describes the options that can be configured for Automated Backup.
The actual configuration steps vary depending on whether you use the Azure portal or
Azure PowerShell commands. Automated Backup uses backup compression by default,
and you can't disable it.
Storage Account (an Azure storage account): The account to use for storing
Automated Backup files in blob storage. A container is created at this location to
store all backup files. The backup file naming convention includes the date, time,
and machine name.
Password (password text): A password for encryption keys. It's required only if
encryption is enabled. To restore an encrypted backup, you must have the correct
password and the related certificate that was used at the time the backup was taken.
On the SQL Server settings tab, scroll down to Automated backup and select Enable.
The following Azure portal screenshot shows the SQL Automated Backup settings.
Navigate to the SQL virtual machines resource for your SQL Server 2014 virtual machine
and then select Backups.
When finished, select the Apply button on the bottom of the Backups page to save your
changes.
If you are enabling Automated Backup for the first time, Azure configures the SQL
Server IaaS Agent in the background. During this time, the Azure portal might not show
that Automated Backup is configured. Wait several minutes for the agent to be installed
and configured. After that, the Azure portal will reflect the new settings.
Note
You can also configure Automated Backup using a template. For more information,
see Azure quickstart template for Automated Backup .
Note
This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.
PowerShell
# Sketch: retrieve the current Automated Backup settings (assumes $vmname and
# $resourcegroupname are set and the Az module is loaded)
(Get-AzVMSqlServerExtension -VMName $vmname -ResourceGroupName $resourcegroupname).AutoBackupSettings
Enable : False
EnableEncryption : False
RetentionPeriod : -1
StorageUrl : NOTSET
StorageAccessKey :
Password :
BackupSystemDbs : False
BackupScheduleType :
FullBackupFrequency :
FullBackupStartTime :
FullBackupWindowHours :
LogBackupFrequency :
If your output shows that Enable is set to False, you need to enable Automated
Backup. You enable and configure Automated Backup by using the same cmdlets, as
described in the next section.
Note
If you check the settings immediately after making a change, it is possible that you
will get back the old configuration values. Wait a few minutes and check the
settings again to make sure that your changes were applied.
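The wait described above can be scripted. The following is a minimal polling sketch; it assumes $vmname and $resourcegroupname are already set as in the surrounding scripts and that the Az module is loaded.

```powershell
# Poll once per minute until the extension reports Automated Backup as enabled
# (sketch; add a timeout for unattended use).
do {
    Start-Sleep -Seconds 60
    $settings = (Get-AzVMSqlServerExtension -VMName $vmname `
        -ResourceGroupName $resourcegroupname).AutoBackupSettings
} until ($settings.Enable)
$settings
```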
First, select or create a storage account for the backup files. The following script selects
a storage account or creates it if it does not exist.
PowerShell
$storage_accountname = "yourstorageaccount"
$storage_resourcegroupname = $resourcegroupname
# Sketch: look up the account, and create it if it's missing. The SKU is
# illustrative, and $region is assumed to be set to your VM's region.
$storage = Get-AzStorageAccount -ResourceGroupName $storage_resourcegroupname -Name $storage_accountname -ErrorAction SilentlyContinue
If (-Not $storage) {
    $storage = New-AzStorageAccount -ResourceGroupName $storage_resourcegroupname -Name $storage_accountname -SkuName Standard_GRS -Location $region
}
Note
Automated Backup doesn't support storing backups in premium storage, but it can
take backups from VM disks that use premium storage.
PowerShell
# Sketch following the documented cmdlet pattern; assumes $storage, $vmname,
# $resourcegroupname, and $storage_resourcegroupname are set as above.
$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable -RetentionPeriodInDays 10 -StorageContext $storage.Context -ResourceGroupName $storage_resourcegroupname
Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig -VMName $vmname -ResourceGroupName $resourcegroupname
It could take several minutes to install and configure the SQL Server IaaS Agent.
Note
PowerShell
# Sketch: enable encryption by supplying a certificate password (assumes the
# variables from the previous snippets; don't hard-code secrets in production).
$password = "P@ssw0rd"
$encryptionpassword = ConvertTo-SecureString -String $password -AsPlainText -Force
$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable -EnableEncryption -CertificatePassword $encryptionpassword -RetentionPeriodInDays 10 -StorageContext $storage.Context -ResourceGroupName $storage_resourcegroupname
Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig -VMName $vmname -ResourceGroupName $resourcegroupname
To confirm your settings are applied, verify the Automated Backup configuration.
PowerShell
(Get-AzVMSqlServerExtension -VMName $vmname -ResourceGroupName $resourcegroupname).AutoBackupSettings
Example script
The following script provides a set of variables that you can customize to enable
and configure Automated Backup for your VM. Adjust it to your requirements; for
example, change it if you want to disable the backup of system databases or enable
encryption.
PowerShell
# Sketch reconstructed from the documented cmdlets; adjust the variables for
# your environment before running.
$vmname = "yourvmname"
$resourcegroupname = "vmresourcegroupname"
$storage_accountname = "storageaccountname"
$storage_resourcegroupname = $resourcegroupname
$retentionperiod = 10
# Select the storage account, or create it if it doesn't exist
$storage = Get-AzStorageAccount -ResourceGroupName $storage_resourcegroupname -Name $storage_accountname -ErrorAction SilentlyContinue
If (-Not $storage) {
    $vmlocation = (Get-AzVM -ResourceGroupName $resourcegroupname -Name $vmname).Location
    $storage = New-AzStorageAccount -ResourceGroupName $storage_resourcegroupname -Name $storage_accountname -SkuName Standard_GRS -Location $vmlocation
}
# Configure and apply Automated Backup
$autobackupconfig = New-AzVMSqlServerAutoBackupConfig -Enable -RetentionPeriodInDays $retentionperiod -StorageContext $storage.Context -ResourceGroupName $storage_resourcegroupname
Set-AzVMSqlServerExtension -AutoBackupSettings $autobackupconfig -VMName $vmname -ResourceGroupName $resourcegroupname
Monitoring
To monitor Automated Backup on SQL Server 2014, you have two main options. Because
Automated Backup uses the SQL Server Managed Backup feature, the same monitoring
techniques apply to both.
Note
The schema for Managed Backup in SQL Server 2014 is msdb.smart_admin. In SQL
Server 2016 this changed to msdb.managed_backup, and the reference topics use
this newer schema. But for SQL Server 2014, you must continue to use the
smart_admin schema for all Managed Backup objects.
Another option is to take advantage of the built-in Database Mail feature for
notifications.
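As a sketch of the first option, you can query the Managed Backup objects directly. This assumes the SqlServer PowerShell module (which provides Invoke-Sqlcmd) is available on the VM and that you run it locally against the default instance.

```powershell
# List the Managed Backup configuration for all databases. SQL Server 2014
# exposes these objects under the msdb.smart_admin schema.
Invoke-Sqlcmd -ServerInstance "." -Query `
    "SELECT * FROM msdb.smart_admin.fn_backup_db_config(NULL);"
```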
Next steps
Automated Backup configures Managed Backup on Azure VMs. So it is important to
review the documentation for Managed Backup on SQL Server 2014.
You can find additional backup and restore guidance for SQL Server on Azure VMs in
the following article: Backup and restore for SQL Server on Azure virtual machines.
For information about other available automation tasks, see SQL Server IaaS Agent
Extension.
For more information about running SQL Server on Azure VMs, see SQL Server on Azure
virtual machines overview.
Use the Azure portal to configure a
multiple-subnet availability group
(preview) for SQL Server on Azure VMs
Article • 05/10/2023
Applies to:
SQL Server on Azure VM
This article describes how to use the Azure portal to configure an availability group
for SQL Server on Azure VMs in multiple subnets by creating:
Note
This deployment method is currently in preview. It supports SQL Server 2016 and
later on Windows Server 2016 and later.
Deploying a multiple-subnet availability group through the portal provides an easy end-
to-end experience for users. It configures the virtual machines by following the best
practices for high availability and disaster recovery (HADR).
Although this article uses the Azure portal to configure the availability group
environment, you can also do so manually.
Note
It's possible to lift and shift your availability group solution to SQL Server on Azure
VMs by using Azure Migrate. To learn more, see Migrate an availability group.
Prerequisites
To configure an Always On availability group by using the Azure portal, you must have
the following prerequisites:
An Azure subscription
A resource group
A domain user account that has Create Computer Object permissions in the
domain. This user will create the cluster and availability group, and will install
SQL Server.
A domain SQL Server service account to control SQL Server. This should be the
same account for every SQL Server VM that you want to add to the availability
group.
1. In the Azure portal, on the left menu, select Azure SQL. If Azure SQL isn't in the list,
select All services, type Azure SQL in the search box, and select the result.
3. Under SQL virtual machines, select the High availability checkbox. In the Image
box, type the version of SQL Server that you're interested in (such as 2019), and
then choose a SQL Server image (such as Free SQL Server License: SQL 2019
Developer on Windows Server 2019).
After you select the High availability checkbox, the portal displays the supported
SQL Server versions, starting with SQL Server 2016.
4. Select Create.
1. From the dropdown lists, choose the subscription and resource group that contain
your domain controller and where you intend to deploy your availability group.
2. Use the slider to select the number of virtual machines that you want to create for
the availability group. The minimum is 2, and the maximum is 9. The virtual
machine names are pre-populated, but you can edit them by selecting Edit names.
3. For Region, select a region. All VMs will be deployed to the same region.
4. For Availability, select either Availability Zone or Availability Set. For more
information about availability options, see Availability.
6. The Image area displays the chosen SQL Server VM image. Use the dropdown to
change the image to deploy. Select Configure VM generation to choose the VM
generation.
7. Select See all sizes for the size of the virtual machines. All created VMs will be the
same size. For production workloads, see the recommended machine sizes and
configuration in Performance best practices for SQL Server on Azure VMs.
9. Under SQL Server License, you have the option to enable Azure Hybrid Benefit to
bring your own SQL Server license and save on licensing cost. This option is
available only if you're a Software Assurance customer.
Select Yes if you want to enable Azure Hybrid Benefit, and then confirm that you
have Software Assurance by selecting the checkbox. This option is unavailable if
you selected one of the free SQL Server images, such as the developer edition.
10. Select Next: Networking.
1. Select the virtual network from the dropdown list. The list is pre-populated based
on the region and resource group that you previously chose on the Basics tab. The
selected virtual network should contain the domain controller VM.
2. Under NIC network security group, select Basic. Choosing a basic security group
allows you to select inbound ports for the SQL Server VM.
4. Each virtual machine that you create has to be in its own subnet.
Under Create subnets, select Manage subnet configuration to open the Subnets
pane for the virtual network. Then, either create a subnet (+Subnet) for each
virtual machine or validate that a subnet is available for each virtual machine that
you want to create for the availability group.
When you're done, use the X to close the subnet management pane and go back
to the page for availability group deployment.
5. Choose a Public IP SKU type. All machines will use this public IP type.
6. Use the dropdown lists to assign the subnet, public IP address, and listener IP
address to each VM that you're creating. If you're using a Windows Server 2016
image, you also need to assign the cluster IP address.
When you're assigning a subnet to a virtual machine, the listener and cluster boxes
are pre-populated with available IP addresses. Place your cursor in the box if you
want to edit the IP address. Select Create new if you need to create a new IP
address.
7. If you want to delete the newly created public IP address and NIC when you delete
the VM, select the checkbox.
For the deployment to work, all the accounts need to already be present in Active
Directory for the domain controller VM. This deployment process doesn't create any
accounts and will fail if you provide an invalid account. For more information about the
required permissions, review Configure cluster accounts in Active Directory.
1. Under Windows Server Failover Cluster details, provide the name that you want
to use for the failover cluster.
2. From the dropdown list, select the storage account that you want to use for the
cloud witness. If one doesn't exist, select Create a new storage account.
For Domain join user name and Domain join password, enter the credentials
for the account that creates the Windows Server failover cluster name in
Active Directory and joins the VMs to the domain. This account must have
Create Computer Objects permissions.
4. Under SQL Server details, provide the domain-joined account that you want to use
to manage SQL Server on the VMs. You can choose to use the same user that
created the cluster and joined the VMs to the domain by choosing Same as
domain join account. Or you can select Custom and provide different account
details to use with the SQL Server service account.
5. Select Next: Disks.
1. Under OS disk type, select the type of disk that you want for your operating
system. We recommend Premium for production systems, but it isn't available for a
Basic VM. To use a Premium SSD, change the virtual machine size.
4. Under Data storage, choose the location for your data drive, the disk type, and the
number of disks. You can also select the checkbox to store your system databases
on your data drive instead of the local C drive.
5. Under Log storage, you can choose to use the same drive as the data drive for
your transaction log files, or you can select a separate drive from the dropdown
list. You can also choose the name of the drive, the disk type, and the number of
disks.
6. Under TempDb storage, configure your tempdb database settings. Choices include
the location of the database files, the number of files, initial size, and autogrowth
size in megabytes.
Currently, during deployment, the maximum number of tempdb files is eight. But
you can add more files after the SQL Server VM is deployed.
7. Select OK to save your storage configuration settings.
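As noted above, you can add more tempdb files after the SQL Server VM is deployed. A hedged sketch using Invoke-Sqlcmd, run on the VM itself; the file path, name, and sizes are placeholders:

```powershell
# Add another tempdb data file after deployment (illustrative values; point
# FILENAME at your configured tempdb drive).
Invoke-Sqlcmd -ServerInstance "." -Query @"
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev9, FILENAME = N'D:\tempDb\tempdev9.ndf',
          SIZE = 8MB, FILEGROWTH = 64MB);
"@
```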
b. Select the role, either Primary or Secondary, for each virtual machine to be
created.
c. Choose the availability group settings that best suit your business needs.
2. Under Security & Networking, select SQL connectivity to access the SQL Server
instance on the VMs. For more information about connectivity options, see
Connectivity.
3. If you require SQL Server authentication, select Enable under SQL Server
Authentication and provide the login name and password. These credentials will
be used across all the VMs that you're deploying. For more information about
authentication options, see Authentication.
4. For Azure Key Vault integration, select Enable if you want to use Azure Key Vault
to store security secrets for encryption. Then, fill in the requested information. To
learn more, see Azure Key Vault integration.
5. Select Change SQL instance settings to modify SQL Server configuration options.
These options include server collation, maximum degree of parallelism (MAXDOP),
minimum and maximum memory, and whether you want to optimize for ad hoc
workloads.
The script is pre-populated with the values provided in the previous steps. Run the
PowerShell script as a domain user on the domain controller virtual machine or on a
domain-joined Windows Server VM.
After the script has run and the prerequisites have been validated, select the
confirmation checkbox.
1. Select Review + Create.
2. On the Review + Create tab, review the summary. Then select Create to create the
SQL Servers, failover cluster, availability group, and listener.
You can monitor the deployment from the Azure portal. The Notifications button at the
top of the screen shows the basic status of the deployment.
After the deployment finishes, you can browse to the SQL virtual machines resource in
the portal. Under Settings, select High Availability to monitor the health of the
availability group. Select the arrow next to the name of your availability group to see a
list of all replicas.
Note
Synchronization Health on the High Availability page of the Azure portal will show
Not Healthy until you add databases to your availability group.
Configure a firewall
This deployment creates a firewall rule for the listener on port 5022, but it doesn't
configure a firewall rule for SQL Server VM port 1433. After the virtual machines are
created, you can configure any firewall rules. For more information, see Configure the
firewall.
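If you manage Windows Firewall directly, opening port 1433 can be done with one cmdlet. A minimal sketch, run inside each SQL Server VM; the display name is arbitrary:

```powershell
# Allow inbound TCP 1433 for the default SQL Server instance.
New-NetFirewallRule -DisplayName "SQL Server default instance" `
    -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
```

Remember to also allow the port in any network security group attached to the VM or subnet.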
1. Connect to one of your SQL Server VMs by using your preferred method, such as a
remote desktop connection (RDP). Use a domain account that's a member of the
sysadmin fixed server role on all of the SQL Server instances.
5. Expand Availability Groups, right-click your availability group, and then select Add
Database.
6. Follow the prompts to select the database that you want to add to your availability
group.
After you add databases, you can check your availability group in the Azure portal and
confirm that the status is Healthy.
Modify the availability group
After you deploy your availability group through the portal, all changes to the
availability group need to be done manually. If you want to remove a replica, you can do
so through SQL Server Management Studio or Transact-SQL, and then delete the VM
through the Azure portal. If you want to add a replica, you have to deploy the virtual
machine manually to the resource group, join it to the domain, and add the replica as
you normally would in a traditional on-premises environment.
Remove a cluster
You can remove a cluster by using the latest version of the Azure CLI or PowerShell.
First, remove all of the SQL Server VMs from the cluster:
Azure CLI
# Run once per SQL Server VM in the cluster (names are placeholders)
az sql vm remove-from-group --name <SQLVMName> --resource-group <ResourceGroupName>
If the SQL Server VMs that you removed were the only VMs in the cluster, then the
cluster will be destroyed. If any other VMs remain in the cluster, those VMs won't be
removed and the cluster won't be destroyed.
Next, remove the cluster metadata from the SQL IaaS Agent extension:
Azure CLI
az sql vm group delete --name <ClusterName> --resource-group <ResourceGroupName>
Troubleshoot
If you run into problems, you can check the deployment history and then review
common errors and their resolutions.
Changes to the cluster and availability group via the portal happen through
deployments. Deployment history can provide more detail if there are problems with
creating or onboarding the cluster, or with creating the availability group.
To view the logs for the deployment and check the deployment history:
If the deployment fails and you want to redeploy by using the portal, you need to
manually clean up the resources, because deployment through the portal isn't
idempotent (repeatable). These cleanup tasks include deleting VMs and removing
entries in Active Directory and/or DNS. However, if you use the Azure portal to create a
template to deploy your availability group, and then use the template for automation,
clean-up of resources isn't necessary because the template is idempotent.
Next steps
After the availability group is deployed, consider optimizing the HADR settings for SQL
Server on Azure VMs.
Applies to:
SQL Server on Azure VM
This article teaches you how to migrate your Always On availability group (AG) from
a single subnet to multiple subnets, which simplifies connecting to your listener
for SQL Server on Azure virtual machines (VMs).
Overview
Customers who are running SQL Server on Azure virtual machines can implement an
Always On availability group (AG) in either a single subnet or multiple subnets (multi-
subnet). A multi-subnet configuration simplifies the availability group environment by
removing the need for an Azure Load Balancer or a Distributed Network Name (DNN) to
route traffic to the listener on the Azure network. Although the multi-subnet
approach is recommended, it requires application connection strings to set
MultiSubnetFailover = true , which might not be possible immediately because it
requires application-level changes.
If you originally created an availability group in a single subnet and are using an Azure
Load Balancer or DNN for the listener and now want to reduce complexity by moving to
a multi-subnet configuration, you can do so with some manual steps.
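For reference, a connection string that sets the flag might look like the following sketch. The listener and database names are placeholders:

```powershell
# Example ADO.NET-style connection string for a multi-subnet AG listener.
$connectionString = "Server=tcp:ag-listener,1433;Database=MyDatabase;" +
    "Integrated Security=SSPI;MultiSubnetFailover=True"
```

With MultiSubnetFailover=True, the client attempts connections to all listener IP addresses in parallel, so failover to a replica in another subnet doesn't depend on DNS time-to-live delays.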
Consider the following two ways to migrate your availability group to multiple subnets:
Create a new environment to perform side-by-side testing
Manually move an existing availability group
Caution
Performing any migration involves some risk, so, as always, test thoroughly in a
non-production environment before making changes in production.
Initially in the new multi-subnet environment, create the listener with a different name
than the existing single subnet environment. A newly named listener in a new availability
group allows for side-by-side testing of the application (testing with both the multi-
subnet and the current load balancer or DNN in place).
Once the multi-subnet environment is thoroughly validated, then you could cut over to
the new infrastructure. Depending on the environment (production, test), use a
maintenance window to complete the change. During the maintenance window, restore
the database to the new primary replica, drop the availability group listener in both
environments, and then recreate the listener in the multi-subnet environment using the
same name as the previous listener, the one used in the application connection string.
1. Create a new subnet for each secondary, as all virtual machines are currently in the
same subnet.
2. Determine the Cluster IP and Listener IP for all servers in the AG. For example, if
you have an availability group with two nodes, you have the following:
3. Add the Cluster IP and Listener IP to the primary replica server. Adding these IP
addresses is an online operation.
4. In the Azure portal, move the secondary server to the new subnet by going to the
virtual machine > Networking > Network Interface > IP Configurations. Moving
the server to a new subnet reboots the secondary replica server.
5. Add the Cluster IP and the Listener IP to the secondary replica server. Adding these
IP addresses is an online operation.
6. At this point, the IP addresses and subnets are in place, so you can delete the
load balancer.
8. If you're using Windows Server 2019 and later versions, skip this step. If you're
using Windows Server 2016, manually add the cluster IPs to the FCI.
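Before adding the cluster and listener IP addresses in the steps above, it can help to inspect the existing IP address resources. A sketch using the FailoverClusters module, run on a cluster node:

```powershell
# List the cluster's IP address resources, their state, and owning group.
Get-ClusterResource |
    Where-Object { $_.ResourceType.ToString() -eq "IP Address" } |
    Format-Table Name, State, OwnerGroup
```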
Next steps
Always On availability groups with SQL Server on Azure VMs
Overview of Always On availability groups
HADR settings for SQL Server on Azure VMs
Tutorial: Prerequisites for availability
groups in multiple subnets (SQL Server
on Azure VMs)
Article • 07/10/2023
In this tutorial, complete the prerequisites for creating an Always On availability group
for SQL Server on Azure Virtual Machines (VMs) in multiple subnets. At the end of this
tutorial, you will have a domain controller on two Azure virtual machines, two SQL
Server VMs in multiple subnets, and a storage account in a single resource group.
Time estimate: This tutorial creates several resources in Azure and may take up to 30
minutes to complete.
The following diagram illustrates the resources you deploy in this tutorial:
Prerequisites
To complete this tutorial, you need the following:
An Azure subscription. You can open a free Azure account or activate Visual
Studio subscriber benefits.
A basic understanding of, and familiarity with, Always On availability groups in SQL
Server.
4. On the Create a resource group page, fill out the values to create the resource
group:
a. Choose the appropriate Azure subscription from the drop-down.
b. Provide a name for your resource group, such as SQL-HA-RG.
c. Choose a region from the drop-down, such as West US 2. Be sure to deploy all
subsequent resources to this location as well.
d. Select Review + create to review your resource parameters, and then select
Create to create your resource group.
To create the virtual network in the Azure portal, follow these steps:
2. Search for virtual network in the Marketplace search box and choose the virtual
network tile from Microsoft. Select Create on the Virtual network page.
3. On the Create virtual network page, enter the following information on the Basics
tab:
a. Under Project details, choose the appropriate Azure Subscription, and the
Resource group you created previously, such as SQL-HA-RG.
b. Under Instance details, provide a name for your virtual network, such as
SQLHAVNET, and choose the same region as your resource group from the
drop-down.
4. On the IP addresses tab, select the default subnet to open the Edit subnet page.
Change the name to DC-subnet to use for the domain controller subnet. Select
Save.
5. Select + Add subnet to add an additional subnet for your first SQL Server VM, and
fill in the following values:
a. Provide a value for the Subnet name, such as SQL-subnet-1.
b. Provide a unique subnet address range within the virtual network address
space. For example, you can increment the third octet of the DC-subnet address
range by 1.
Azure returns you to the portal dashboard and notifies you when the new network
is created.
Create domain controllers
After your network and subnets are ready, create a virtual machine (or two optionally,
for high availability) and configure it as your domain controller.
3. On the Windows Server tile from Microsoft, select the Create drop-down and
choose the Windows Server 2016 Datacenter image.
4. Fill out the values on the Create a virtual machine page to create your domain
controller VM, such as DC-VM-1. Optionally, create an additional VM, such as DC-
VM-2 to provide high availability for the Active Directory Domain Services. Use the
values in the following table to create your VM(s):
Region: the location where you deployed your resource group and virtual network
Password: Contoso!0000
Subnet: DC-subnet
Azure notifies you when your virtual machines are created and ready to use.
1. Go to your resource group in the Azure portal and select the DC-VM-1 machine.
2. On the DC-VM-1 page, select Connect to download an RDP file for remote
desktop access and then open the file.
3. Connect to the RDP session using your configured administrator account
(DomainAdmin) and password (Contoso!0000).
4. Open the Server Manager dashboard (which may open by default) and choose to
Add roles and features.
6. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any additional features that are required by these roles.
Note
Windows warns you that there is no static IP address. If you're testing the
configuration, select Continue. For production scenarios, set the IP address to
static in the Azure portal, or use PowerShell to set the static IP address of the
domain controller machine.
7. Select Next until you reach the Confirmation section. Select the Restart the
destination server automatically if required check box.
8. Select Install.
9. After the features finish installing, return to the Server Manager dashboard.
12. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.
13. In the Active Directory Domain Services Configuration Wizard, use the following
values:
Page Setting
14. Select Next to go through the other pages in the wizard. On the Prerequisites
Check page, verify that you see the following message: All prerequisite checks
passed successfully. You can review any applicable warning messages, but it's
possible to continue with the installation.
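The earlier note recommends a static private IP address for production domain controllers. A hedged Az PowerShell sketch; the resource group and NIC names are placeholders:

```powershell
# Pin the domain controller's private IP address so it can't change on reboot.
$nic = Get-AzNetworkInterface -ResourceGroupName "SQL-HA-RG" -Name "dc-vm-1-nic"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic | Set-AzNetworkInterface
```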
To identify the private IP address of the VM in the Azure portal, follow these steps:
1. Go to your resource group in the Azure portal and select the primary domain
controller, DC-VM-1.
2. On the DC-VM-1 page, choose Networking in the Settings pane.
3. Note the NIC Private IP address. Use this IP address as the DNS server for the
other virtual machines. In the example image, the private IP address is 10.38.0.4.
1. Go to your resource group in the Azure portal , and select your virtual network,
such as SQLHAVNET.
2. Select DNS servers under the Settings pane and then select Custom.
3. Enter the private IP address you identified previously in the IP Address field, such
as 10.38.0.4 .
4. Select Save.
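The same change can be scripted with Az PowerShell. A sketch using the names and address from the example above:

```powershell
# Set a custom DNS server on the virtual network (assumes the virtual network
# and its DhcpOptions already exist).
$vnet = Get-AzVirtualNetwork -ResourceGroupName "SQL-HA-RG" -Name "SQLHAVNET"
$vnet.DhcpOptions.DnsServers.Add("10.38.0.4")
$vnet | Set-AzVirtualNetwork
```

VMs pick up the new DNS setting after a restart.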
Set the preferred DNS server address, join the domain, and then configure the
secondary domain controller.
The preferred DNS server address shouldn't be updated directly within a VM.
Instead, edit it from the Azure portal, PowerShell, or the Azure CLI. The following
steps make the change in the Azure portal:
1. Sign-in to the Azure portal .
2. In the search box at the top of the portal, enter Network interface. Select Network
interfaces in the search results.
3. Select the network interface for the second domain controller that you want to
view or change settings for from the list.
5. Select either:
Inherit from virtual network: Choose this option to inherit the DNS server
setting defined for the virtual network the network interface is assigned to.
This would automatically inherit the primary domain controller as the DNS
server.
Custom: You can configure your own DNS server to resolve names across
multiple virtual networks. Enter the IP address of the server you want to use
as a DNS server. The DNS server address you specify is assigned only to this
network interface and overrides any DNS setting for the virtual network the
network interface is assigned to. If you select custom, then input the IP
address of the primary domain controller, such as 10.38.0.4 .
6. Select Save.
7. If using a Custom DNS Server, return to the virtual machine in the Azure portal and
restart the VM.
1. If you're not already connected, open an RDP session to your secondary domain
controller, and open Server Manager Dashboard (which may be open by default).
4. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any additional features that are required by these roles.
5. After the features finish installing, return to the Server Manager dashboard.
8. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.
12. In Select a domain from the forest, choose your domain and then select OK.
13. In Domain Controller Options, use the default values and set a DSRM password.
Note
The DNS Options page might warn you that a delegation for this DNS server
can't be created. You can ignore this warning in non-production
environments.
14. Select Next until the dialog reaches the Prerequisites check. Then select Install.
After the server finishes the configuration changes, restart the server.
Configure two accounts in total: an installation account and a service account for
both SQL Server VMs. For example, use the values in the following table for the
accounts:
Account: Install; VM: Both; Full domain name: Corp\Install; Description: Log in to
either VM with this account to configure the cluster and availability group.
Account: SQLSvc; VM: Both; Full domain name: Corp\SQLSvc; Description: Use this
account for the SQL Server service on both SQL Server VMs.
2. In Server Manager, select Tools, and then select Active Directory Administrative
Center.
4. On the right Tasks pane, select New, and then select User.
5. Enter in the new user account and set a complex password. For non-production
environments, set the user account to never expire.
1. Open the Active Directory Administrative Center from Server Manager, if it's not
open already.
4. Select Extensions, and then select the Advanced button on the Security tab.
5. On the Advanced Security Settings for corp dialog box, select Add.
6. Select Select a principal, search for CORP\Install, and then select OK.
7. Check the boxes next to Read all properties and Create Computer Objects.
8. Select OK, and then select OK again. Close the corp properties window.
Now that you've finished configuring Active Directory and the user objects, you are
ready to create your SQL Server VMs.
However, before creating your SQL Server VMs, consider the following design decisions:
For the virtual machine storage, use Azure Managed Disks. Microsoft recommends
Managed Disks for SQL Server virtual machines as they handle storage behind the
scenes. For more information, see Azure Managed Disks Overview.
For the virtual machines, this tutorial uses public IP addresses. A public IP address
enables remote connection directly to the virtual machine over the internet and makes
configuration steps easier. In production environments, Microsoft recommends only
private IP addresses in order to reduce the vulnerability footprint of the SQL Server
instance VM resource.
Use a single NIC per server (cluster node). Azure networking has physical redundancy,
which makes additional NICs unnecessary on a failover cluster deployed to an Azure
virtual machine. The cluster validation report warns you that the nodes are reachable
only on a single network. You can ignore this warning when your failover cluster is on
Azure virtual machines.
2. Search for Azure SQL and select the Azure SQL tile from Microsoft.
3. On the Azure SQL page, select Create and then choose the SQL Server 2016 SP2
Enterprise on Windows Server 2016 image from the drop-down.
Use the following table to fill out the values on the Create a virtual machine page to
create both SQL Server VMs, such as SQL-VM-1 and SQL-VM-2 (your IP addresses may
differ from the examples in the table):
Both VMs use the same values:
Gallery image: SQL Server 2016 SP2 Enterprise on Windows Server 2016
SQL Server settings:
SQL connectivity = Private (within Virtual Network)
Port = 1433
SQL Authentication = Disable
Azure Key Vault integration = Disable
Storage optimization = Transactional processing
SQL Data = 1024 GiB, 5000 IOPS, 200 MB/s
SQL Log = 1024 GiB, 5000 IOPS, 200 MB/s
SQL TempDb = Use local SSD drive
Automated patching = Sunday at 2:00
Automated backup = Disable
Note
These suggested machine sizes are only intended for testing availability groups in
Azure Virtual Machines. For optimized production workloads, see the size
recommendations in Performance best practices for SQL Server on Azure VMs.
If you're on Windows Server 2016 or earlier, follow the steps in this section to assign a secondary IP address to each SQL Server VM for both the availability group listener and the cluster.
If you're on Windows Server 2019 or later, assign only a secondary IP address for the availability group listener, and skip the steps to assign a Windows cluster IP, unless you plan to configure your cluster with a virtual network name (VNN). In that case, assign both IP addresses to each SQL Server VM as you would for Windows Server 2016.
1. Go to your resource group in the Azure portal and select the first SQL Server
VM, such as SQL-VM-1.
2. Select Networking in the Settings pane, and then select the Network Interface:
3. On the Network Interface page, select IP configurations in the Settings pane and
then choose + Add to add an additional IP address:
4. On the Add IP configuration page, do the following:
a. Specify the Name as the Windows Cluster IP, such as windows-cluster-ip for
Windows 2016 and earlier. Skip this step if you're on Windows Server 2019 or
later.
b. Set the Allocation to Static.
c. Enter an unused IP address in the same subnet (SQL-subnet-1) as the SQL
Server VM (SQL-VM-1), such as 10.38.1.10 .
d. Leave the Public IP address at the default of Disassociate.
e. Select OK to finish adding the IP configuration.
5. Select + Add again to configure an additional IP address for the availability group
listener (with a name such as availability-group-listener), again specifying an
unused IP address in SQL-subnet-1 such as 10.38.1.11 :
6. Repeat these steps again for the second SQL Server VM, such as SQL-VM-2. Assign
two unused secondary IP addresses within SQL-subnet-2. Use the values from the
following table to add the IP configuration:
To join the corp.contoso.com domain, follow the same steps for the SQL Server VM as
you did when you joined the domain with the secondary domain controller.
Wait for each SQL Server VM to restart, and then you can add your accounts.
Add accounts
Add the installation account as an administrator on each VM, grant permission to the
installation account and local accounts within SQL Server, and update the SQL Server
service account.
Tip
Be sure you sign in with the domain administrator account. In previous steps, you
were using the BUILTIN administrator account. Now that the server is part of the
domain, use the domain account. In your RDP session, specify DOMAIN\username,
such as CORP\DomainAdmin.
1. Wait until the VM is restarted, then launch the RDP file again from the first SQL
Server VM to sign in to SQL-VM-1 by using the CORP\DomainAdmin account.
2. In Server Manager, select Tools, and then select Computer Management.
3. In the Computer Management window, expand Local Users and Groups, and then
select Groups.
4. Double-click the Administrators group.
5. In the Administrators Properties dialog, select the Add button.
6. Enter the user CORP\Install, and then select OK.
7. Select OK to close the Administrator Properties dialog.
8. Repeat these steps on SQL-VM-2.
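If you prefer to script adding the installation account to the local Administrators group, the following PowerShell sketch, run in an elevated session on each VM, does the same thing; the account name assumes the CORP\Install account from this tutorial:

PowerShell

# Add the domain installation account to the local Administrators group
Add-LocalGroupMember -Group "Administrators" -Member "CORP\Install"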
1. Connect to the server through the Remote Desktop Protocol (RDP) by using the
<MachineName>\DomainAdmin account, such as SQL-VM-1\DomainAdmin .
2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.
3. In Object Explorer, select Security.
4. Right-click Logins. Select New Login.
5. In Login - New, select Search.
6. Select Locations.
7. Enter the domain administrator network credentials.
8. Use the installation account (CORP\install).
9. Set the sign-in to be a member of the sysadmin fixed server role.
10. Select OK.
11. Repeat these steps on the second SQL Server VM, such as SQL-VM-2, connecting
with the relevant machine name account, such as SQL-VM-2\DomainAdmin .
To add the [NT AUTHORITY\SYSTEM] login and grant it the appropriate permissions, follow these steps:
1. Connect to the first SQL Server VM through the Remote Desktop Protocol (RDP) by
using the <MachineName>\DomainAdmin account, such as SQL-VM-1\DomainAdmin .
2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.
In a new query window, run the following Transact-SQL to create the login and grant the permissions an availability group requires (CONNECT SQL, VIEW SERVER STATE, and ALTER ANY AVAILABILITY GROUP):

SQL

USE [master]
GO
CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GO
GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]
GO
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]
GO
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]
GO
5. Repeat these steps on the second SQL Server VM, such as SQL-VM-2, connecting
with the relevant machine name account, such as SQL-VM-2\DomainAdmin .
1. Connect to the first SQL Server VM through the Remote Desktop Protocol (RDP) by
using the <MachineName>\DomainAdmin account, such as SQL-VM-1\DomainAdmin .
2. Open SQL Server Configuration Manager.
3. Right-click the SQL Server service, and then select Properties.
4. Provide the account (Corp\SQLSvc) and password.
5. Select Apply to commit your change and restart the SQL Server service.
6. Repeat these steps on the other SQL Server VM (SQL-VM-2), signing in with the machine domain account, such as SQL-VM-2\DomainAdmin , and providing the service account (Corp\SQLSvc).
1. In the portal, open the SQL-HA-RG resource group and select + Create
3. Select Storage account and select Create, configuring it with the following values:
a. Select your subscription and select the resource group SQL-HA-RG.
b. Enter a Storage Account Name for your storage account. Storage account
names must be between 3 and 24 characters in length and may contain
numbers and lowercase letters only. The storage account name must also be
unique within Azure.
c. Select your Region.
d. For Performance, select Standard: Recommended for most scenarios (general-
purpose v2 account). Azure Premium Storage is not supported for a cloud
witness.
e. For Redundancy, select Locally redundant storage (LRS). Failover Clustering
uses the blob file as the arbitration point, which requires some consistency
guarantees when reading the data. Therefore you must select Locally redundant
storage for the Replication type.
f. Select Review + create.
SQL Server VM: Port 1433 for a default instance of SQL Server.
Database mirroring endpoint: Any available port. Examples frequently use 5022.
Open these firewall ports on both SQL Server VMs. The method of opening the ports
depends on the firewall solution that you use, and may vary from the Windows Firewall
example provided in this section.
1. On the first SQL Server Start screen, launch Windows Firewall with Advanced
Security.
2. On the left pane, select Inbound Rules. On the right pane, select New Rule.
4. For the port, specify TCP and type the appropriate port numbers. See the following
example:
5. Select Next.
6. On the Action page, select Allow the connection, and then select Next.
7. On the Profile page, accept the default settings, and then select Next.
8. On the Name page, specify a rule name (such as SQL Inbound) in the Name text
box, and then select Finish.
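On Windows Firewall, the same inbound rule can be created with PowerShell; the ports below assume a default SQL Server instance (1433) and the commonly used database mirroring endpoint port (5022):

PowerShell

# Allow inbound TCP on the SQL Server and mirroring endpoint ports
New-NetFirewallRule -DisplayName "SQL Inbound" -Direction Inbound -Protocol TCP -LocalPort 1433,5022 -Action Allow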
Next steps
Now that you've configured the prerequisites, get started with configuring your
availability group in multiple subnets.
This tutorial shows how to create an Always On availability group for SQL Server on
Azure Virtual Machines (VMs) within multiple subnets. The complete tutorial creates a
Windows Server Failover Cluster, and an availability group with two SQL Server replicas
and a listener.
Time estimate: Assuming your prerequisites are complete, this tutorial should take
about 30 minutes to complete.
Prerequisites
The following table lists the prerequisites that you need to complete before starting this
tutorial:
Requirement | Description
Two SQL Server instances | - Each VM in two different Azure availability zones or the same availability set
- In separate subnets within an Azure Virtual Network
- With two secondary IPs assigned to each VM
- In a single domain
SQL Server service account | A domain account used by the SQL Server service for each machine
The tutorial assumes you have a basic understanding of SQL Server Always On
availability groups.
1. Connect to the SQL Server virtual machine through the Remote Desktop Protocol
(RDP) using a domain account that has permissions to create objects in AD, such
as the CORP\Install domain account created in the prerequisites article.
Create cluster
After the cluster feature has been added to each SQL Server VM, you're ready to create
the Windows Server Failover Cluster.
1. Use Remote Desktop Protocol (RDP) to connect to the first SQL Server VM (such as
SQL-VM-1) using a domain account that has permissions to create objects in AD,
such as the CORP\Install domain account created in the prerequisites article.
2. In the Server Manager dashboard, select Tools, and then select Failover Cluster
Manager.
3. In the left pane, right-click Failover Cluster Manager, and then select Create a
Cluster.
4. In the Create Cluster Wizard, create a two-node cluster by stepping through the
pages using the settings provided in the following table:
Page | Settings
Select Servers | Type the first SQL Server name (such as SQL-VM-1) in Enter server name and select Add.
Validation Warning | Select Yes. When I click Next, run configuration validation tests, and then return to the process of creating the cluster.
Access Point for Administering the Cluster | Type a cluster name, for example SQLAGCluster1, in Cluster Name.
Confirmation | Uncheck Add all eligible storage to the cluster and select Next.
Warning
If you do not uncheck Add all eligible storage to the cluster, Windows
detaches the virtual disks during the clustering process. As a result, they don't
appear in Disk Manager or Explorer until the storage is removed from the
cluster and reattached using PowerShell.
During the prerequisites, you should have assigned secondary IP addresses to each SQL Server VM, as in the following example table (your specific IP addresses may vary):

VM name | Subnet name | Subnet address range | Secondary IP name | Secondary IP address

Assign these IP addresses as the cluster IP addresses for each relevant subnet.
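As an alternative to the Failover Cluster Manager wizard, the cluster can be created with PowerShell. This sketch uses the example names from this tutorial and assumes the two dedicated Windows cluster IP addresses are 10.38.1.10 and 10.38.2.10 (adjust to the secondary IPs you assigned):

PowerShell

# Create a two-node cluster without adding eligible storage
New-Cluster -Name SQLAGCluster1 -Node SQL-VM-1,SQL-VM-2 -StaticAddress 10.38.1.10,10.38.2.10 -NoStorage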
Note
On Windows Server 2019, the cluster creates a Distributed Server Name instead of
the Cluster Network Name, and the cluster name object (CNO) is automatically
registered with the IP addresses for all of the nodes in the cluster, eliminating the
need for a dedicated Windows cluster IP address. If you're on Windows Server 2019,
either skip this section and any other steps that refer to the Cluster Core
Resources, or create a virtual network name (VNN)-based cluster using PowerShell.
See the blog Failover Cluster: Cluster Network Object for more information.
1. In Failover Cluster Manager, scroll down to Cluster Core Resources and expand
the cluster details. You should see the Name and two IP Address resources from
each subnet in the Failed state.
2. Right-click the first failed IP Address resource, and then select Properties.
3. Select Static IP Address and update the IP address to the dedicated windows
cluster IP address in the subnet you assigned to the first SQL Server VM (such as
SQL-VM-1). Select OK.
4. Repeat the steps for the second failed IP Address resource, using the dedicated
windows cluster IP address for the subnet of the second SQL Server VM (such as
SQL-VM-2).
5. In the Cluster Core Resources section, right-click cluster name and select Bring
Online. Wait until the name and one of the IP address resources are online.
Because the SQL Server VMs are in different subnets, the cluster has an OR
dependency on the two dedicated Windows cluster IP addresses. When the cluster name
resource comes online, it updates the domain controller (DC) server with a new Active
Directory (AD) computer account. If the cluster core resources move nodes, one IP
address goes offline while the other comes online, updating the DC server with the new
IP address association.
Tip
When running the cluster on Azure VMs in a production environment, change the
cluster settings to a more relaxed monitoring state to improve cluster stability and
reliability in a cloud environment. To learn more, see SQL Server VM - HADR
configuration best practices.
Configure quorum
On a two node cluster, a quorum device is necessary for cluster reliability and stability.
On Azure VMs, the cloud witness is the recommended quorum configuration, though
there are other options available. The steps in this section configure a cloud witness for
quorum. Identify the access keys to the storage account and then configure the cloud
witness.
Use the Azure portal to view and copy storage access keys for the Azure Storage
Account created in the prerequisites article.
To view and copy the storage access keys, follow these steps:
1. Go to your resource group in the Azure portal and select the storage account
you created.
3. Run the PowerShell script to set TLS (Transport Layer Security) value for the
connection to 1.2:
PowerShell
[Net.ServicePointManager]::SecurityProtocol =
[Net.SecurityProtocolType]::Tls12
4. Use PowerShell to configure the cloud witness. Replace the values for storage
account name and access key with your specific information:

PowerShell

Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<StorageAccountAccessKey>"
Enable AG feature
The Always On availability group feature is disabled by default. Use the SQL Server
Configuration Manager to enable the feature on both SQL Server instances.
1. Launch the RDP file to the first SQL Server VM (such as SQL-VM-1) with a domain
account that is a member of the sysadmin fixed server role, such as the CORP\Install
domain account created in the prerequisites document.
2. From the Start screen of one of your SQL Server VMs, launch SQL Server
Configuration Manager.
3. In the browser tree, highlight SQL Server Services, right-click the SQL Server
(MSSQLSERVER) service and select Properties.
4. Select the Always On High Availability tab, then check the box to Enable Always
On availability groups:
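If the SqlServer PowerShell module is installed, the feature can also be enabled per instance from PowerShell; note that this restarts the SQL Server service:

PowerShell

# Enable the Always On availability groups feature on the instance
Enable-SqlAlwaysOn -ServerInstance "SQL-VM-1" -Force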
Create database
For your database, you can either follow the steps in this section to create a new
database, or restore an AdventureWorks database. You also need to back up the
database to initialize the log chain. Databases that have not been backed up do not
meet the prerequisites for an availability group.
1. Launch the RDP file to the first SQL Server VM (such as SQL-VM-1) with a domain
account that is a member of the sysadmin fixed server role, such as the
CORP\Install domain account created in the prerequisites document.
2. Open SQL Server Management Studio and connect to the SQL Server instance.
3. In Object Explorer, right-click Databases and select New Database.
4. In Database name, type MyDB1.
5. Select the Options page, and choose Full from the Recovery model drop-down, if
it's not full by default. The database must be in full recovery mode to meet the
prerequisites of participating in an availability group.
6. Select OK to close the New Database page and create your new database.
2. Select OK to take a full backup of the database to the default backup location.
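The database creation, recovery model change, and initial full backup can also be done in a single Transact-SQL script; the database name is the example from this tutorial, and the backup path is illustrative:

SQL

CREATE DATABASE [MyDB1];
-- Full recovery model is required for availability group membership
ALTER DATABASE [MyDB1] SET RECOVERY FULL;
-- A full backup initializes the log chain
BACKUP DATABASE [MyDB1] TO DISK = N'C:\Backup\MyDB1.bak';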
1. On the first SQL Server VM in Server Manager, select Tools. Open Computer
Management.
3. Right-click Shares, and select New Share... and then use the Create a Shared
Folder Wizard to create a share.
4. For Folder Path, select Browse and locate or create a path for the database backup
shared folder, such as C:\Backup . Select Next.
5. In Name, Description, and Settings verify the share name and path. Select Next.
8. Check Full Control to grant full access to the share the SQL Server service account
( Corp\SQLSvc ):
9. Select OK.
1. In Object Explorer in SQL Server Management Studio (SSMS) on the first SQL
Server VM (such as SQL-VM-1), right-click Always On High Availability and select
New Availability Group Wizard.
2. On the Introduction page, select Next. In the Specify availability group Name
page, type a name for the availability group in Availability group name, such as
AG1. Select Next.
3. On the Select Databases page, select your database, and then select Next. If your
database does not meet the prerequisites, make sure it's in full recovery mode, and
take a backup:
4. On the Specify Replicas page, select Add Replica.
5. The Connect to Server dialog pops up. Type the name of the second server in
Server name, such as SQL-VM-2. Select Connect.
6. On the Specify Replicas page, check the boxes for Automatic Failover and choose
Synchronous commit for the availability mode from the drop-down:
7. Select the Endpoints tab to confirm the ports used for the database mirroring
endpoint are those you opened in the firewall:
8. Select the Listener tab and choose to Create an availability group listener using
the following values for the listener:
Field Value
9. Select Add to provide the secondary dedicated IP address for the listener for both
SQL Server VMs.
The following table shows the example IP addresses created for the listener from
the prerequisites document (though your specific IP addresses may vary):

VM name | Subnet name | Subnet address range | Secondary IP name | Secondary IP address
10. Choose the first subnet (such as 10.38.1.0/24) from the drop-down on the Add IP
address dialog box and then provide the secondary dedicated listener IPv4
address, such as 10.38.1.11 . Select OK.
11. Repeat this step again, but choose the other subnet from the drop-down (such as
10.38.2.0/24), and provide the secondary dedicated listener IPv4 address from the
other SQL Server VM, such as 10.38.2.11 . Select OK.
12. After reviewing the values on the Listener page, select Next:
13. On the Select Initial Data Synchronization page, choose Full database and log
backup and provide the network share location you created previously, such as
\\SQL-VM-1\Backup .
Note
Full synchronization takes a full backup of the database on the first instance
of SQL Server and restores it to the second instance. For large databases, full
synchronization is not recommended because it may take a long time. You
can reduce this time by manually taking a backup of the database and
restoring it with NO RECOVERY . If the database is already restored with NO
RECOVERY on the second SQL Server before configuring the availability group,
choose Join only. If you want to take the backup after configuring the
availability group, choose Skip initial data synchronization.
14. On the Validation page, confirm that all validation checks have passed, and then
choose Next:
15. On the Summary page, select Finish and wait for the wizard to configure your new
availability group. Choose More details on the Progress page to view the detailed
progress. When you see that the wizard completed successfully on the Results
page, inspect the summary to verify the availability group and listener were
created successfully.
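If you choose Join only, as described in the note above, manually seeding the secondary replica can be sketched in Transact-SQL like the following, using the example share from this tutorial (run the backups on SQL-VM-1 and the restores on SQL-VM-2):

SQL

-- On the primary replica (SQL-VM-1)
BACKUP DATABASE [MyDB1] TO DISK = N'\\SQL-VM-1\Backup\MyDB1.bak';
BACKUP LOG [MyDB1] TO DISK = N'\\SQL-VM-1\Backup\MyDB1.trn';

-- On the secondary replica (SQL-VM-2), restore WITH NORECOVERY
RESTORE DATABASE [MyDB1] FROM DISK = N'\\SQL-VM-1\Backup\MyDB1.bak' WITH NORECOVERY;
RESTORE LOG [MyDB1] FROM DISK = N'\\SQL-VM-1\Backup\MyDB1.trn' WITH NORECOVERY;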
The availability group dashboard shows the replica, the failover mode of each
replica, and the synchronization state, such as the following example:
2. Open the Failover Cluster Manager, select your cluster, and choose Roles to view
the availability group role you created within the cluster. Choose the role AG1 and
select the Resources tab to view the listener and the associated IP addresses, such
as the following example:
At this point, you have an availability group with replicas on two instances of SQL Server
and a corresponding availability group listener as well. You can connect using the
listener and you can move the availability group between instances using SQL Server
Management Studio.
Warning
Do not try to fail over the availability group by using the Failover Cluster Manager.
All failover operations should be performed from within SQL Server Management
Studio, such as by using the Always On Dashboard or Transact-SQL (T-SQL). For
more information, see Restrictions for using the Failover Cluster Manager with
availability groups.
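A planned manual failover can also be issued with Transact-SQL. Connect to the instance of SQL Server that hosts the secondary replica you want to become the primary, and run the following (AG1 is the example availability group name from this tutorial):

SQL

-- Run on the target secondary replica
ALTER AVAILABILITY GROUP [AG1] FAILOVER;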
1. Use RDP to connect to a SQL Server that is in the same virtual network, but does
not own the replica, such as the other SQL Server instance within the cluster, or
any other VM with SQL Server Management Studio installed on it.
2. Open SQL Server Management Studio, and in the Connect to Server dialog box
type the name of the listener (such as AG1-Listener) in Server name:, and then
select Options:
Note
When connecting to an availability group that spans different subnets, setting
MultiSubnetFailover=true provides faster detection of, and connection to, the
current primary replica.
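As an illustration, an ADO.NET-style connection string through the listener might look like the following; the listener and database names are the examples from this tutorial:

Server=tcp:AG1-Listener,1433;Database=MyDB1;Integrated Security=SSPI;MultiSubnetFailover=True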
Next steps
Now that you've configured your multi-subnet availability group, if needed, you can
extend this across multiple regions.
Applies to:
SQL Server on Azure VM
This tutorial explains how to configure an Always On availability group replica for SQL
Server on Azure Virtual Machines (VMs) in an Azure region that is remote to the primary
replica. You can use this configuration for disaster recovery (DR).
You can also use the steps in this article to extend an existing on-premises availability
group to Azure.
This tutorial builds on the tutorial to manually deploy an availability group in multiple
subnets in a single region. Mentions of the local region in this article refer to the virtual
machines and availability group already configured in the first region. The remote
region is the new infrastructure that's being added in this tutorial.
Overview
The following image shows a common deployment of an availability group on Azure
virtual machines:
In the deployment shown in the diagram, all virtual machines are in one Azure region.
The availability group replicas can have synchronous commit with automatic failover on
SQL-VM-1 and SQL-VM-2. To build this architecture, see the availability group template
or tutorial.
The diagram shows a new virtual machine called SQL-VM-3. SQL-VM-3 is in a different
Azure region. It's added to the Windows Server failover cluster and can host an
availability group replica. In this architecture, the replica in the remote region is normally
configured with asynchronous commit availability mode and manual failover mode.
Note
An Azure availability set is required when more than one virtual machine is in the
same region. If only one virtual machine is in the region, the availability set is not
required.
You can place a virtual machine in an availability set only at creation time. If the
virtual machine is already in an availability set, you can add a virtual machine for an
additional replica later.
When availability group replicas are on Azure virtual machines in different Azure
regions, you can connect the virtual networks by using virtual network peering or a site-
to-site VPN gateway.
Important
This architecture incurs outbound data charges for data replicated between Azure
regions. See Bandwidth pricing.
The following table lists details for the local (current) region and what will be set up in
the new remote region.
To create a virtual network and subnet in the new region in the Azure portal:
2. Search for virtual network in the Marketplace search box, and then select the
virtual network tile from Microsoft.
3. On the Create virtual network page, select Create. Then enter the following
information on the Basics tab:
a. Under Project details, for Subscription, select the appropriate Azure
subscription. For Resource group, select the resource group that you created
previously, such as SQL-HA-RG.
b. Under Instance details, provide a name for your virtual network, such as
remote_HAVNET. Then choose a new remote region.
4. On the IP addresses tab, select the ellipsis (...) next to + Add a subnet. Select
Delete address space to remove the existing address space, if you need a different
address range.
5. Select Add an IP address space to open the pane to create the address space that
you need. This tutorial uses the address space of the remote region: 10.19.0.0/16.
Select Add.
6. Add subnets for the domain controller and the SQL Server.
c. Provide a unique subnet address range within the virtual network address
space.
For example, if your address range is 10.19.0.0/16, enter these values for the DC-
Subnet subnet: 10.19.1.0 for Starting address and /24 for Subnet size.
d. Select Add to add your new subnet.
e. Repeat the process for the SQL-subnet1. When complete, you should have a
subnet for the domain controller in the remote region and a subnet for each
SQL Server in the remote region. For example, in this tutorial, the remote region
virtual network contains:
1. Go to your resource group in the Azure portal , and select your virtual network,
such as remote-HAVNET.
2. Select DNS servers under the Settings pane and then select Custom.
3. Enter the private IP address you identified previously in the IP Address field, such
as 10.38.0.4 .
4. Select Save.
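The custom DNS server setting for the virtual network can also be applied with the Azure CLI; the resource group, virtual network name, and IP address below are the examples from this tutorial:

Azure CLI

az network vnet update --resource-group SQL-HA-RG --name remote-HAVNET --dns-servers 10.38.0.4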
Connect virtual networks with virtual network peering by using the Azure portal
(recommended)
In some cases, you might have to use PowerShell to create the connection
between virtual networks. For example, if you use different Azure accounts, you
can't configure the connection in the portal. In this case, review Configure a
network-to-network connection by using the Azure portal.
This tutorial uses virtual network peering. To configure virtual network peering:
1. In the search box at the top of the Azure portal, type autoHAVNET, which is the
virtual network in your local region. When autoHAVNET appears in the search
results, select it.
Setting | Value
This virtual network: Peering link name | Enter autoHAVNET-remote_HAVNET for the name of the peering from autoHAVNET to the remote virtual network.
Remote virtual network: Peering link name | Enter remote_HAVNET-autoHAVNET for the name of the peering from the remote virtual network to autoHAVNET.
Remote virtual network: Virtual network | Select remote_HAVNET for the name of the remote virtual network. The remote virtual network can be in the same region as autoHAVNET or in a different region.
The following table shows the settings for the two machines:

Setting | Value
Password | Contoso!0000
Size | DS1_V2
Subnet | DC-subnet
Diagnostics | Enabled
Don't update the preferred DNS server address directly within a VM; edit it from the
Azure portal, PowerShell, or the Azure CLI instead. The following steps make the
change in the Azure portal:
2. In the search box at the top of the portal, enter Network interface. Select Network
interfaces in the search results.
3. Select the network interface for the second domain controller that you want to
view or change settings for from the list.
5. Since this domain controller isn't in the same virtual network as the primary
domain controller select Custom and input the IP address of the local domain
controller, such as 10.38.0.4 . The DNS server address you specify is assigned only
to this network interface and overrides any DNS setting for the virtual network the
network interface is assigned to.
6. Select Save.
7. Return to the virtual machine in the Azure portal and restart the VM. Once the
virtual machine has restarted, you can join the VM to the domain.
Join the domain
Next, join the corp.contoso.com domain. To do so, follow these steps:
Once your server has joined the domain, you can configure it as the second domain
controller. To do so, follow these steps:
1. If you're not already connected, open an RDP session to your secondary domain
controller, and open Server Manager Dashboard (which may be open by default).
5. After the features finish installing, return to the Server Manager dashboard.
8. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.
12. In Select a domain from the forest, choose your domain and then select OK.
13. In Domain Controller Options, use the default values and set a DSRM password.
Note
The DNS Options page might warn you that a delegation for this DNS server
can't be created. You can ignore this warning in non-production
environments.
14. Select Next until the dialog reaches the Prerequisites check. Then select Install.
After the server finishes the configuration changes, restart the server.
For the highest level of redundancy, resiliency, and availability, deploy the VMs within
separate Availability Zones. Availability Zones are unique physical locations within an
Azure region. Each zone is made up of one or more datacenters with independent
power, cooling, and networking. For Azure regions that don't yet support Availability
Zones, use Availability Sets instead, placing all the VMs within the same Availability Set.
For the virtual machine storage, use Azure Managed Disks. Microsoft recommends
Managed Disks for SQL Server virtual machines as they handle storage behind the
scenes. For more information, see Azure Managed Disks Overview.
For the virtual machines, this tutorial uses public IP addresses. A public IP address
enables remote connection directly to the virtual machine over the internet and makes
configuration steps easier. In production environments, Microsoft recommends only
private IP addresses in order to reduce the vulnerability footprint of the SQL Server
instance VM resource.
Use a single NIC per server (cluster node). Azure networking has physical redundancy,
which makes additional NICs unnecessary on a failover cluster deployed to an Azure
virtual machine. The cluster validation report will warn you that the nodes are reachable
only on a single network. You can ignore this warning when your failover cluster is on
Azure virtual machines.
Page | Setting
Select the appropriate gallery item | SQL Server 2016 SP1 Enterprise on Windows Server 2016
Basics | User Name = DomainAdmin
Password = Contoso!0000
Settings | Virtual network = remote-HAVNET
SQL Server settings | SQL connectivity = Private (within Virtual Network)
Port = 1433
The machine size suggested here is meant for testing availability groups in Azure
virtual machines. For the best performance on production workloads, see the
recommendations for SQL Server machine sizes and configuration in Checklist: Best
practices for SQL Server on Azure VMs.
After the VM is fully provisioned, you need to configure it, join it to the
corp.contoso.com domain, and grant CORP\Install administrative rights to the
machines.
On Windows Server 2016 and earlier, you need to assign an additional secondary IP
address to each SQL Server VM to use for the Windows cluster IP, because the cluster
uses the Cluster Network Name rather than the default Distributed Network Name (DNN)
introduced in Windows Server 2019. With a DNN, the cluster name object (CNO) is
automatically registered with the IP addresses for all the nodes of the cluster,
eliminating the need for a dedicated Windows cluster IP address.
If you're on Windows Server 2016 or earlier, follow the steps in this section to assign a
secondary IP address to each SQL Server VM for both the availability group listener and
the cluster.

Important
If you're on Windows Server 2019 or later, assign only a secondary IP address for
the availability group listener, and skip the steps to assign a Windows cluster IP,
unless you plan to configure your cluster with a virtual network name (VNN). In that
case, assign both IP addresses to each SQL Server VM as you would for Windows
Server 2016.
1. Go to your resource group in the Azure portal and select the SQL Server VM,
SQL-VM-3.
2. Select Networking in the Settings pane, and then select the Network Interface.
3. On the Network Interface page, select IP configurations in the Settings pane and
then choose + Add to add an additional IP address.
5. Select + Add again to configure an additional IP address for the availability group
listener (with a name such as availability-group-listener), again specifying an
unused IP address in SQL-subnet-1 such as 10.19.1.11 .
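If you prefer scripting, the same secondary IP configurations can be added with the Azure CLI. The sketch below is illustrative: the resource group name (SQL-HA-RG), the NIC name, and the unused cluster IP are assumed placeholders for this tutorial's environment; only 10.19.1.11 for the listener comes from the steps above.

```shell
# Sketch: add a secondary IP configuration for the Windows cluster IP
# (needed on Windows Server 2016 and earlier).
# Resource group, NIC name, and the cluster IP are assumed placeholders.
az network nic ip-config create `
  --resource-group SQL-HA-RG `
  --nic-name <sql-vm-3-nic-name> `
  --name windows-cluster-ip `
  --private-ip-address <unused-IP-in-SQL-subnet-1>

# Add a secondary IP configuration for the availability group listener.
az network nic ip-config create `
  --resource-group SQL-HA-RG `
  --nic-name <sql-vm-3-nic-name> `
  --name availability-group-listener `
  --private-ip-address 10.19.1.11
```

Repeat for each SQL Server VM that participates in the cluster.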
Add accounts
The next task is to add the installation account as an administrator on the SQL Server
VM, and then grant permission to that account and to local accounts within SQL Server.
You can then update the SQL Server service account.
1. Wait until the VM is restarted, and then open the RDP file again from the primary
domain controller. Sign in to SQL-VM-3 by using the CORP\DomainAdmin
account.
Tip
In earlier steps, you were using the BUILTIN administrator account. Now that
the server is in the domain, make sure that you sign in with the domain
administrator account. In your RDP session, specify DOMAIN\username.
2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.
6. Select Locations.
7. Enter the domain administrator's network credentials. Use the installation account
(CORP\Install).
9. Select OK.
SQL
USE [master]
GO
GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]
GO
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]
GO
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]
GO
SQL
GO
GO
GO
For SQL Server availability groups, each SQL Server VM needs to run as a domain
account.
6. Select Install.
Note
You can now automate this task, along with actually joining the SQL Server VMs to
the failover cluster, by using the Azure CLI and Azure quickstart templates.
Open these firewall ports on both SQL Server VMs. The method of opening the ports
depends on the firewall solution that you use, and may vary from the Windows Firewall
example provided in this section.
1. On the first SQL Server VM, from the Start screen, launch Windows Firewall with
Advanced Security.
2. On the left pane, select Inbound Rules. On the right pane, select New Rule.
4. For the port, specify TCP and type the appropriate port numbers. See the following
example:
5. Select Next.
6. On the Action page, select Allow the connection , and then select Next.
7. On the Profile page, accept the default settings, and then select Next.
8. On the Name page, specify a rule name (such as SQL Inbound) in the Name text
box, and then select Finish.
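As an alternative to the Windows Firewall GUI, the same inbound rule can be created from an elevated command prompt with the built-in netsh tool. This is a sketch using the rule name and port from the example above; the probe-port rule assumes the 59999 default used elsewhere in this configuration.

```shell
:: Allow inbound TCP 1433 for SQL Server; run in an elevated prompt on each SQL Server VM.
netsh advfirewall firewall add rule name="SQL Inbound" dir=in action=allow protocol=TCP localport=1433

:: Optionally open the load balancer health probe port (59999 is the default assumed here).
netsh advfirewall firewall add rule name="SQL Probe Inbound" dir=in action=allow protocol=TCP localport=59999
```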
1. Use RDP to connect to a SQL Server VM in the existing cluster. Use a domain
account that's an administrator on both SQL Server VMs and the witness server.
2. On the Server Manager dashboard, select Tools, and then select Failover Cluster
Manager.
3. On the left pane, right-click Failover Cluster Manager, and then select Connect to
Cluster.
4. In the Select Cluster window, under Cluster name, choose <Cluster on this
server>. Then select OK.
5. In the browser tree, right-click the cluster and select Add Node.
7. On the Select Servers page, add the name of the new SQL Server instance. Enter
the server name in Enter server name, select Add, and then select Next.
8. On the Validation Warning page, select No. (In a production scenario, you should
perform the validation tests). Then, select Next.
9. On the Confirmation page, if you're using Storage Spaces, clear the Add all
eligible storage to the cluster checkbox.
Warning
If you don't clear Add all eligible storage to the cluster, Windows detaches
the virtual disks during the clustering process. As a result, they don't appear in
Disk Manager or Explorer until the storage is removed from the cluster and
reattached via PowerShell.
10. Select Next.
Failover Cluster Manager shows that your cluster has a new node and lists it in the
Nodes container.
Note
On Windows Server 2019, the cluster creates a distributed server name instead of a
cluster network name. If you're using Windows Server 2019, skip to Add an IP
address for the availability group listener. You can create a cluster network name
by using PowerShell. For more information, review the blog post Failover Cluster:
Cluster Network Object .
Next, create the IP address resource and add it to the cluster for the new SQL Server VM:
1. In Failover Cluster Manager, select the name of the cluster. Right-click the cluster
name under Cluster Core Resources, and then select Properties:
2. In the Cluster Properties dialog, select Add under IP Addresses, and then add the
IP address of the cluster name from the remote network region. Select OK in the IP
Address dialog, and then select OK in the Cluster Properties dialog to save the
new IP address.
Open the Cluster Properties dialog once more, and select the Dependencies tab.
Configure an OR dependency for the two IP addresses.
1. In Failover Cluster Manager, right-click the availability group role. Point to Add
Resource, point to More Resources, and then select IP Address.
2. To configure this IP address, right-click the resource under Other Resources, and
then select Properties.
3. For Name, enter a name for the new resource. For Network, select the network
from the remote datacenter. Select Static IP Address, and then in the Address box,
assign the static IP address that you previously selected for the listener; in this
tutorial, it's 10.19.1.11.
5. Add the IP address resource as a dependency for the listener client access point
(network name) cluster.
Right-click the listener client access point, and then select Properties. Browse to
the Dependencies tab and add the new IP address resource to the listener client
access point. The following screenshot shows a properly configured IP address
cluster resource:
Important
The cluster resource group includes both IP addresses. Both IP addresses are
dependencies for the listener client access point. Use the OR operator in the
cluster dependency configuration.
2. In the browser tree, select SQL Server Services. Right-click the SQL Server
(MSSQLSERVER) service, and then select Properties.
3. Select the AlwaysOn High Availability tab, and then select Enable AlwaysOn
Availability Groups.
4. Select Apply. Select OK in the pop-up dialog.
1. Open a remote desktop session to the primary SQL Server instance in the
availability group, and then open SQL Server Management Studio (SSMS).
4. Select Add Replica and connect to the new SQL Server VM.
Important
A replica in a remote Azure region should be set to asynchronous replication
with manual failover.
5. On the Select Initial Data Synchronization page, select Full and specify a shared
network location. For the location, use the backup share that you created. In the
example, it was \\<First SQL Server>\Backup\. Then select Next.
Note
Full synchronization takes a full backup of the database on the first instance
of SQL Server and restores it to the second instance. For large databases, we
don't recommend full synchronization because it might take a long time.
You can reduce this time by manually backing up the database and restoring
it with NO RECOVERY . If the database is already restored with NO RECOVERY on
the second SQL Server instance before you configure the availability group,
select Join only. If you want to take the backup after you configure the
availability group, select Skip initial data synchronization.
6. On the Validation page, select Next. This page should look similar to the following
image:
Note
A warning for the listener configuration says you haven't configured an
availability group listener. You can ignore this warning because the listener is
already set up.
7. On the Summary page, select Finish, and then wait while the wizard configures the
new availability group. On the Progress page, you can select More details to view
the detailed progress.
After the wizard finishes the configuration, inspect the Results page to verify that
the availability group is successfully created.
Your availability group dashboard should look similar to the following screenshot, now
with another replica:
The dashboard shows the replicas, the failover mode of each replica, and the
synchronization state.
Check the availability group listener
1. In Object Explorer, expand Always On High Availability, expand Availability
Groups, and then expand Availability Group Listener.
2. Right-click the listener name and select Properties. All IP addresses should now
appear for the listener (one in each region).
1. In Object Explorer, connect to the instance of SQL Server that hosts the primary
replica.
2. Under Always On Availability Groups, right-click your availability group and select
Properties.
3. On the General page, under Availability Replicas, set the secondary replica on the
disaster recovery (DR) site to use Synchronous Commit availability mode and
Automatic failover mode.
If you have a secondary replica in the same site as your primary replica for high
availability, set this replica to Asynchronous Commit and Manual.
4. Select OK.
5. In Object Explorer, right-click the availability group and select Show Dashboard.
7. In Object Explorer, right-click the availability group and select Failover. SQL Server
Management Studio opens a wizard to fail over SQL Server.
8. Select Next, and select the SQL Server instance on the DR site. Select Next again.
9. Connect to the SQL Server instance on the DR site, and then select Next.
10. On the Summary page, verify the settings and select Finish.
After you test connectivity, move the primary replica back to your primary datacenter
and set the availability mode back to its normal operating settings. The following table
shows the normal operating settings for the architecture described in this article:
For more information about planned and forced manual failover, see the following
articles:
Next steps
To learn more, see:
Applies to:
SQL Server on Azure VM
Tip
This article describes how to use PowerShell or the Azure CLI to deploy a Windows
failover cluster, add SQL Server VMs to the cluster, and create the internal load balancer
and listener for an Always On availability group within a single subnet.
Deployment of the availability group is still done manually through SQL Server
Management Studio (SSMS) or Transact-SQL (T-SQL).
While this article uses PowerShell and the Azure CLI to configure the availability group
environment, you can also do so from the Azure portal, by using Azure quickstart
templates, or manually.
Note
It's now possible to lift and shift your availability group solution to SQL Server on
Azure VMs using Azure Migrate. See Migrate availability group to learn more.
Prerequisites
To configure an Always On availability group, you must have the following prerequisites:
An Azure subscription .
A resource group with a domain controller.
One or more domain-joined VMs in Azure running SQL Server 2016 (or later)
Enterprise edition in the same availability set or different availability zones that
have been registered with the SQL IaaS Agent extension.
The latest version of PowerShell or the Azure CLI.
Two available (not used by any entity) IP addresses. One is for the internal load
balancer. The other is for the availability group listener within the same subnet as
the availability group. If you're using an existing load balancer, you only need one
available IP address for the availability group listener.
Windows Server Core is not a supported operating system for the PowerShell
commands referenced in this article, because they depend on RSAT, which is not
included in Core installations of Windows.
Permissions
You need the following account permissions to configure the Always On availability
group by using the Azure CLI:
An existing domain user account that has Create Computer Object permission in
the domain. For example, a domain admin account typically has sufficient
permission (for example: account@domain.com). This account should also be part
of the local administrator group on each VM to create the cluster.
The domain user account that controls SQL Server.
Azure CLI
Tip
You might see the error az sql: 'vm' is not in the 'az sql' command group if
you're using an outdated version of the Azure CLI. Download the latest version
of Azure CLI to get past this error.
The following code snippet defines the metadata for the cluster:
Azure CLI
# --sa-key '4Z4/i1Dn8/bpbseyWX' `
# --storage-account 'https://cloudwitness.blob.core.windows.net/'
--sa-key '<PublicKey>' `
--storage-account '<ex:https://cloudwitness.blob.core.windows.net/>'
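The lines above are only a fragment of the cluster-metadata command. A fuller sketch of the command they belong to, az sql vm group create, follows; the image offer/SKU and account names are illustrative assumptions, not values from your environment.

```shell
# Sketch: define the cluster metadata with the SQL IaaS Agent extension.
# All values below are placeholders/assumptions; adjust for your domain,
# SQL Server image, and cloud witness storage account.
az sql vm group create `
  --name <cluster-name> `
  --resource-group <resource-group> `
  --image-offer SQL2017-WS2016 `
  --image-sku Enterprise `
  --domain-fqdn domain.com `
  --operator-acc account@domain.com `
  --service-acc account@domain.com `
  --sa-key '<PublicKey>' `
  --storage-account '<ex:https://cloudwitness.blob.core.windows.net/>'
```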
The following code snippet creates the cluster and adds the first SQL Server VM to it:
Azure CLI
# -b Str0ngAzur3P@ssword! -p Str0ngAzur3P@ssword! -s Str0ngAzur3P@ssword!
Use this command to add any other SQL Server VMs to the cluster. Modify only the
-n parameter for the SQL Server VM name.
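The command that the commented passwords above belong to is az sql vm add-to-group. A sketch, with every value a placeholder:

```shell
# Sketch: add a SQL Server VM to the cluster (repeat per VM, changing only -n).
# -b, -p, and -s supply the bootstrap, cluster operator, and SQL service
# account passwords; all values are placeholders.
az sql vm add-to-group `
  -n <SQLServerVMName> `
  -g <resource-group> `
  --sqlvm-group <cluster-name> `
  -b <bootstrap-account-password> `
  -p <operator-account-password> `
  -s <service-account-password>
```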
Configure quorum
Although the disk witness is the most resilient quorum option, it requires an Azure
shared disk, which imposes some limitations on the availability group. As such, the
cloud witness is the recommended quorum solution for clusters that host availability
groups for SQL Server on Azure VMs.
If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.
Validate cluster
For a failover cluster to be supported by Microsoft, it must pass cluster validation.
Connect to the VM using your preferred method, such as Remote Desktop Protocol
(RDP) and validate that your cluster passes validation before proceeding further. Failure
to do so leaves your cluster in an unsupported state.
You can validate the cluster using Failover Cluster Manager (FCM) or the following
PowerShell command:
PowerShell
Test-Cluster
Important
Do not create a listener at this time because this is done through the Azure CLI in
the following sections.
Note
The Always On availability group listener requires an internal instance of Azure Load
Balancer. The internal load balancer provides a "floating" IP address for the availability
group listener that allows for faster failover and reconnection. If the SQL Server VMs in
an availability group are part of the same availability set, you can use a Basic load
balancer. Otherwise, you need to use a Standard load balancer.
Note
The internal load balancer should be in the same virtual network as the SQL Server
VM instances.
Azure CLI
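The internal load balancer can be created with az network lb create. A sketch with assumed names (sqlLB and the frontend/backend names are illustrative); supply an unused IP address in the availability group's subnet:

```shell
# Sketch: create a Standard internal load balancer in the SQL Server VMs'
# virtual network. Names and the private IP address are placeholders.
az network lb create `
  --resource-group <resource-group> `
  --name sqlLB `
  --sku Standard `
  --vnet-name <vnet-name> `
  --subnet <subnet-name> `
  --frontend-ip-name sqlLB-frontend `
  --backend-pool-name sqlLB-backend-pool `
  --private-ip-address <available-IP-address>
```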
Important
The public IP resource for each SQL Server VM should have a Standard SKU to be
compatible with the Standard load balancer. To determine the SKU of your VM's
public IP resource, go to Resource Group, select your Public IP Address resource
for the desired SQL Server VM, and locate the value under SKU in the Overview
pane.
Create listener
After you manually create the availability group, you can create the listener by using az
sql vm ag-listener.
Azure CLI
# --subnet /subscriptions/a1a1-1a11a/resourceGroups/SQLVM-RG/providers/Microsoft.Network/virtualNetworks/SQLVMvNet/subnets/default `
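The subnet fragment above belongs to the listener-creation command. A sketch of the full invocation, assuming the current az sql vm group ag-listener create form of the command; every value is a placeholder except the default ports:

```shell
# Sketch: create the availability group listener through the SQL IaaS Agent extension.
# All names, IPs, and IDs are placeholders; 1433 and 59999 are the defaults
# used elsewhere in this article.
az sql vm group ag-listener create `
  --name <listener-name> `
  --resource-group <resource-group> `
  --group-name <cluster-name> `
  --ag-name <availability-group-name> `
  --ip-address <listener-IP> `
  --load-balancer <load-balancer-name> `
  --probe-port 59999 `
  --port 1433 `
  --subnet /subscriptions/<SubscriptionID>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name> `
  --sqlvms <SQLVM1> <SQLVM2>
```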
Add a replica
To add a new replica to the availability group:
Azure CLI
# -b Str0ngAzur3P@ssword! -p Str0ngAzur3P@ssword! -s Str0ngAzur3P@ssword!
2. Use SQL Server Management Studio to add the SQL Server instance as a
replica within the availability group.
Remove a replica
To remove a replica from the availability group:
Azure CLI
1. Remove the replica from the availability group by using SQL Server
Management Studio.
2. Remove the SQL Server VM metadata from the listener:
Azure CLI
Remove listener
If you later need to remove the availability group listener configured with the Azure CLI,
you must go through the SQL IaaS Agent extension. Because the listener is registered
through the SQL IaaS Agent extension, just deleting it via SQL Server Management
Studio is insufficient.
The best method is to delete it through the SQL IaaS Agent extension by using the
following code snippet in the Azure CLI. Doing so removes the availability group listener
metadata from the SQL IaaS Agent extension. It also physically deletes the listener from
the availability group.
Azure CLI
# Remove the availability group listener
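A sketch of the removal command, az sql vm group ag-listener delete, with placeholder names:

```shell
# Remove the availability group listener metadata (and the listener itself)
# through the SQL IaaS Agent extension. All names are placeholders.
az sql vm group ag-listener delete `
  --name <listener-name> `
  --group-name <cluster-name> `
  --resource-group <resource-group>
```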
Remove cluster
Remove all of the nodes from the cluster to destroy it, and then remove the cluster
metadata from the SQL IaaS Agent extension. You can do so by using the Azure CLI or
PowerShell.
Azure CLI
First, remove all of the SQL Server VMs from the cluster:
Azure CLI
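A sketch of removing a SQL Server VM from the cluster with az sql vm remove-from-group (placeholder names):

```shell
# Remove a SQL Server VM from the cluster; repeat for each SQL Server VM.
az sql vm remove-from-group `
  --name <SQLServerVMName> `
  --resource-group <resource-group>
```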
If the SQL Server VMs that you removed were the only VMs in the cluster, the cluster is
destroyed. If the cluster contains any other VMs besides the removed SQL Server VMs,
those VMs are not removed and the cluster is not destroyed.
Next, remove the cluster metadata from the SQL IaaS Agent extension:
Azure CLI
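A sketch of removing the cluster metadata with az sql vm group delete (placeholder names):

```shell
# Remove the cluster metadata from the SQL IaaS Agent extension.
az sql vm group delete `
  --name <cluster-name> `
  --resource-group <resource-group>
```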
Next steps
Once the availability group is deployed, consider optimizing the HADR settings for SQL
Server on Azure VMs.
Applies to:
SQL Server on Azure VM
Tip
This article describes how to use the Azure quickstart templates to partially automate
the deployment of an Always On availability group configuration for SQL Server virtual
machines (VMs) within a single subnet in Azure. Two Azure quickstart templates are
used in this process:
sql-vm-ag-setup: Creates the Windows failover cluster and joins the SQL Server VMs to it.
sql-vm-aglistener-setup: Creates the availability group listener and configures the internal load balancer. This template can be used only if the Windows failover cluster was created with the 101-sql-vm-ag-setup template.
Other parts of the availability group configuration must be done manually, such as
creating the availability group and creating the internal load balancer. This article
provides the sequence of automated and manual steps.
While this article uses the Azure quickstart templates to configure the availability group
environment, you can also do so by using the Azure portal, PowerShell or the Azure CLI,
or manually.
Note
It's now possible to lift and shift your availability group solution to SQL Server on
Azure VMs using Azure Migrate. See Migrate availability group to learn more.
Prerequisites
To automate the setup of an Always On availability group by using quickstart templates,
you must have the following prerequisites:
An Azure subscription .
A resource group with a domain controller.
One or more domain-joined VMs in Azure running SQL Server 2016 (or later)
Enterprise edition that are in the same availability set or availability zone and that
have been registered with the SQL IaaS Agent extension.
An internal Azure Load Balancer and an available (not used by any entity) IP
address for the availability group listener within the same subnet as the SQL Server
VM.
Permissions
The following permissions are necessary to configure the Always On availability group
by using Azure quickstart templates:
An existing domain user account that has Create Computer Object permission in
the domain. For example, a domain admin account typically has sufficient
permission (for example: account@domain.com). This account should also be part
of the local administrator group on each VM to create the cluster.
The domain user account that controls SQL Server.
Create cluster
After your SQL Server VMs have been registered with the SQL IaaS Agent extension, you
can join your SQL Server VMs to SqlVirtualMachineGroups. This resource defines the
metadata of the Windows failover cluster. Metadata includes the version, edition, fully
qualified domain name, Active Directory accounts to manage both the cluster and SQL
Server, and the storage account as the cloud witness.
Adding SQL Server VMs to the SqlVirtualMachineGroups resource bootstraps the
Windows Failover Cluster service to create the cluster and then joins those SQL Server
VMs to that cluster. This step is automated with the 101-sql-vm-ag-setup quickstart
template. You can implement it by using the following steps:
1. Go to the sql-vm-ag-setup quickstart template. Then, select Deploy to Azure to
open the quickstart template in the Azure portal.
2. Fill out the required fields to configure the metadata for the Windows failover
cluster. You can leave the optional fields blank.
The following table shows the necessary values for the template:
Resource group: The resource group where your SQL Server VMs reside.
Failover Cluster Name: The name that you want for your new Windows failover cluster.
Existing Vm List: The SQL Server VMs that you want to participate in the availability group and be part of this new cluster. Separate these values with a comma and a space (for example: SQLVM1, SQLVM2).
SQL Server Version: The SQL Server version of your SQL Server VMs. Select it from the drop-down list. Currently, only SQL Server 2016 and SQL Server 2017 images are supported.
Existing Fully Qualified Domain Name: The existing FQDN for the domain in which your SQL Server VMs reside.
Existing Domain Account: An existing domain user account that has Create Computer Object permission in the domain, as the CNO is created during template deployment. For example, a domain admin account typically has sufficient permission (for example: account@domain.com). This account should also be part of the local administrator group on each VM to create the cluster.
Domain Account Password: The password for the previously mentioned domain user account.
Existing Sql Service Account: The domain user account that controls the SQL Server service during availability group deployment (for example: account@domain.com).
Sql Service Password: The password used by the domain user account that controls SQL Server.
Cloud Witness Name: A new Azure storage account that will be created and used for the cloud witness. You can modify this name.
3. If you agree to the terms and conditions, select the I Agree to the terms and
conditions stated above check box. Then select Purchase to finish deployment of
the quickstart template.
4. To monitor your deployment, either select the deployment from the Notifications
bell icon in the top navigation banner or go to Resource Group in the Azure portal.
Select Deployments under Settings, and choose the Microsoft.Template
deployment.
Note
Credentials provided during template deployment are stored only for the length of
the deployment. After deployment finishes, those passwords are removed. You'll be
asked to provide them again if you add more SQL Server VMs to the cluster.
Configure quorum
Although the disk witness is the most resilient quorum option, it requires an Azure
shared disk, which imposes some limitations on the availability group. As such, the
cloud witness is the recommended quorum solution for clusters that host availability
groups for SQL Server on Azure VMs.
If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.
Validate cluster
For a failover cluster to be supported by Microsoft, it must pass cluster validation.
Connect to the VM using your preferred method, such as Remote Desktop Protocol
(RDP) and validate that your cluster passes validation before proceeding further. Failure
to do so leaves your cluster in an unsupported state.
You can validate the cluster using Failover Cluster Manager (FCM) or the following
PowerShell command:
PowerShell
Test-Cluster
Note
The Always On availability group listener requires an internal instance of Azure Load
Balancer. The internal load balancer provides a "floating" IP address for the availability
group listener that allows for faster failover and reconnection. If the SQL Server VMs in
an availability group are part of the same availability set, you can use a Basic load
balancer. Otherwise, you need to use a Standard load balancer.
Important
The internal load balancer should be in the same virtual network as the SQL Server
VM instances.
You just need to create the internal load balancer. In step 4, the 101-sql-vm-aglistener-setup
quickstart template handles the rest of the configuration (such as the backend
pool, health probe, and load-balancing rules).
1. In the Azure portal, open the resource group that contains the SQL Server virtual
machines.
3. Search for load balancer. In the search results, select Load Balancer, which is
published by Microsoft.
5. In the Create load balancer dialog box, configure the load balancer as follows:
Name: Enter a text name that represents the load balancer. For example, enter sqlLB.
Type: Internal: Most implementations use an internal load balancer, which allows applications within the same virtual network to connect to the availability group. External: Allows applications to connect to the availability group through a public internet connection.
Virtual network: Select the virtual network that the SQL Server instances are in.
Subnet: Select the subnet that the SQL Server instances are in.
IP address assignment: Static
Subscription: If you have multiple subscriptions, this field might appear. Select the subscription that you want to associate with this resource. It's normally the same subscription as all the resources for the availability group.
Resource group: Select the resource group that the SQL Server instances are in.
Location: Select the Azure location that the SQL Server instances are in.
6. Select Create.
Important
The public IP resource for each SQL Server VM should have a Standard SKU to be
compatible with the Standard load balancer. To determine the SKU of your VM's
public IP resource, go to Resource Group, select your Public IP Address resource
for the SQL Server VM, and locate the value under SKU in the Overview pane.
Create listener
Create the availability group listener and configure the internal load balancer
automatically by using the 101-sql-vm-aglistener-setup quickstart template. The
template provisions the
Microsoft.SqlVirtualMachine/SqlVirtualMachineGroups/AvailabilityGroupListener
resource. The 101-sql-vm-aglistener-setup quickstart template, via the SQL IaaS Agent
extension, does the following actions:
Creates a new frontend IP resource (based on the IP address value provided during
deployment) for the listener.
Configures the network settings for the cluster and the internal load balancer.
Configures the backend pool for the internal load balancer, the health probe, and
the load-balancing rules.
Creates the availability group listener with the given IP address and name.
Note
You can use 101-sql-vm-aglistener-setup only if the Windows failover cluster was
created with the 101-sql-vm-ag-setup template.
To configure the internal load balancer and create the availability group listener, do the
following:
2. Fill out the required fields to configure the internal load balancer, and create the
availability group listener. You can leave the optional fields blank.
The following table shows the necessary values for the template:
Resource group: The resource group where your SQL Server VMs and availability group exist.
Existing Failover Cluster Name: The name of the cluster that your SQL Server VMs are joined to.
Existing Sql Availability Group: The name of the availability group that your SQL Server VMs are a part of.
Existing Vm List: The names of the SQL Server VMs that are part of the previously mentioned availability group. Separate the names with a comma and a space (for example: SQLVM1, SQLVM2).
Listener: The DNS name that you want to assign to the listener. By default, this template specifies the name "aglistener," but you can change it. The name should not exceed 15 characters.
Listener Port: The port that you want the listener to use. Typically, this port should be the default of 1433, which is the port number that the template specifies. But if your default port has been changed, the listener port should use that value instead.
Listener IP: The IP address that you want the listener to use. This address will be created during template deployment, so provide one that isn't already in use.
Existing Subnet: The name of the internal subnet of your SQL Server VMs (for example: default). You can determine this value by going to Resource Group, selecting your virtual network, selecting Subnets in the Settings pane, and copying the value under Name.
Existing Internal Load Balancer: The name of the internal load balancer that you created in step 3.
Probe Port: The probe port that you want the internal load balancer to use. The template uses 59999 by default, but you can change this value.
3. If you agree to the terms and conditions, select the I Agree to the terms and
conditions stated above check box. Select Purchase to finish deployment of the
quickstart template.
4. To monitor your deployment, either select the deployment from the Notifications
bell icon in the top navigation banner or go to Resource Group in the Azure portal.
Select Deployments under Settings, and choose the Microsoft.Template
deployment.
Note
If your deployment fails halfway through, you'll need to manually remove the
newly created listener by using PowerShell before you redeploy the
101-sql-vm-aglistener-setup quickstart template.
Remove listener
If you later need to remove the availability group listener that the template configured,
you must go through the SQL IaaS Agent extension. Because the listener is registered
through the SQL IaaS Agent extension, just deleting it via SQL Server Management
Studio is insufficient.
The best method is to delete it through the SQL IaaS Agent extension by using the
following code snippet in PowerShell. Doing so removes the availability group listener
metadata from the SQL IaaS Agent extension. It also physically deletes the listener from
the availability group.
PowerShell
Remove-AzResource -ResourceId '/subscriptions/<SubscriptionID>/resourceGroups/<resource-group-name>/providers/Microsoft.SqlVirtualMachine/SqlVirtualMachineGroups/<cluster-name>/availabilitygrouplisteners/<listener-name>' -Force
Common errors
This section discusses some known issues and their possible resolution.
To resolve this behavior, remove the listener by using PowerShell, delete the internal
load balancer via the Azure portal, and start again at step 3.
Verify that the account exists. If it does, you might be running into the second situation.
To resolve it, do the following:
1. On the domain controller, open the Active Directory Users and Computers
window from the Tools option in Server Manager.
4. Select the Account tab. If the User logon name box is blank, this is the cause of
your error.
5. Fill in the User logon name box to match the name of the user, and select the
proper domain from the drop-down list.
6. Select Apply to save your changes, and close the dialog box by selecting OK.
After you make these changes, try to deploy the Azure quickstart template once more.
Next steps
To learn more, see:
Applies to:
SQL Server on Azure VM
This tutorial explains how to configure an Always On availability group replica for SQL
Server on Azure virtual machines (VMs) in an Azure region that is remote to the primary
replica. You can use this configuration for the purpose of disaster recovery (DR).
You can also use the steps in this article to extend an existing on-premises availability
group to Azure.
This tutorial builds on the tutorial to manually deploy an availability group in a single
subnet in a single region. Mentions of the local region in this article refer to the virtual
machines and availability group already configured in the first region. The remote
region is the new infrastructure that's being added in this tutorial.
Overview
The following image shows a common deployment of an availability group on Azure
virtual machines:
In the deployment shown in the diagram, all virtual machines are in one Azure region.
The availability group replicas can have synchronous commit with automatic failover on
SQL-1 and SQL-2. To build this architecture, see the availability group template or
tutorial.
The diagram shows a new virtual machine called SQL-3. SQL-3 is in a different Azure
region. It's added to the Windows Server failover cluster and can host an availability
group replica.
The Azure region for SQL-3 has a new Azure load balancer. In this architecture, the
replica in the remote region is normally configured with asynchronous commit
availability mode and manual failover mode.
Note
An Azure availability set is required when more than one virtual machine is in the
same region. If only one virtual machine is in the region, the availability set is not
required.
You can place a virtual machine in an availability set only when you create it. If the
existing virtual machine is already in an availability set, you can add another virtual
machine to that set later to host an additional replica.
When availability group replicas are on Azure virtual machines in different Azure
regions, you can connect the virtual networks by using virtual network peering or a site-
to-site VPN gateway.
Important
This architecture incurs outbound data charges for data replicated between Azure
regions. See Bandwidth pricing.
The following table lists details for the local (current) region and what will be set up in
the new remote region.
To create a virtual network and subnet in the new region in the Azure portal:
2. Search for virtual network in the Marketplace search box, and then select the
virtual network tile from Microsoft.
3. On the Create virtual network page, select Create. Then enter the following
information on the Basics tab:
a. Under Project details, for Subscription, select the appropriate Azure
subscription. For Resource group, select the resource group that you created
previously, such as SQL-HA-RG.
b. Under Instance details, provide a name for your virtual network, such as
remote_HAVNET. Then choose a new remote region.
4. On the IP addresses tab, select the ellipsis (...) next to + Add a subnet. Select
Delete address space to remove the existing address space, if you need a different
address range.
5. Select Add an IP address space to open the pane to create the address space that
you need. This tutorial uses the address space of the remote region: 10.36.0.0/16.
Select Add.
b. Provide a unique subnet address range within the virtual network address
space.
For example, if your address range is 10.36.0.0/16, enter these values for the
admin subnet: 10.36.1.0 for Starting address and /24 for Subnet size.
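If you prefer scripting, the same virtual network and subnet can be created with the Az PowerShell module. The following is a minimal sketch using the values from this tutorial; the region name is a placeholder for your remote region:

```powershell
# Create the admin subnet and the remote virtual network (10.36.0.0/16).
$subnet = New-AzVirtualNetworkSubnetConfig -Name "admin" -AddressPrefix "10.36.1.0/24"
New-AzVirtualNetwork -Name "remote_HAVNET" -ResourceGroupName "SQL-HA-RG" `
    -Location "<remote-region>" -AddressPrefix "10.36.0.0/16" -Subnet $subnet
```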
Connect the virtual networks in the two Azure
regions
After you create the new virtual network and subnet, you're ready to connect the two
regions so they can communicate with each other. There are two methods to do this:
Connect virtual networks with virtual network peering by using the Azure portal
(recommended)
In some cases, you might have to use PowerShell to create the connection
between virtual networks. For example, if you use different Azure accounts, you
can't configure the connection in the portal. In this case, review Configure a
network-to-network connection by using PowerShell.
This tutorial uses virtual network peering. To configure virtual network peering:
1. In the search box at the top of the Azure portal, type autoHAVNET, which is the
virtual network in your local region. When autoHAVNET appears in the search
results, select it.
Setting Value

This virtual network
Peering link name: Enter autoHAVNET-remote_HAVNET for the name of the peering from autoHAVNET to the remote virtual network.

Remote virtual network
Virtual network: Select remote_HAVNET for the name of the remote virtual network. The remote virtual network can be in the same region as autoHAVNET or in a different region.
Peering link name: Enter remote_HAVNET-autoHAVNET for the name of the peering from the remote virtual network to autoHAVNET.
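As a sketch, the two peerings can also be created with Az PowerShell, assuming both virtual networks are in the same subscription and resource group:

```powershell
# Look up both virtual networks.
$local  = Get-AzVirtualNetwork -Name "autoHAVNET" -ResourceGroupName "SQL-HA-RG"
$remote = Get-AzVirtualNetwork -Name "remote_HAVNET" -ResourceGroupName "SQL-HA-RG"

# Create the peering in each direction; both are required for connectivity.
Add-AzVirtualNetworkPeering -Name "autoHAVNET-remote_HAVNET" -VirtualNetwork $local -RemoteVirtualNetworkId $remote.Id
Add-AzVirtualNetworkPeering -Name "remote_HAVNET-autoHAVNET" -VirtualNetwork $remote -RemoteVirtualNetworkId $local.Id
```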
The following table shows the settings for the two machines:
Setting Value
Password Contoso!0000
Size DS1_V2
Subnet admin
Diagnostics Enabled
Don't update the preferred DNS server address directly within the VM. Instead, edit it
from the Azure portal, PowerShell, or the Azure CLI. The following steps make the
change in the Azure portal:
2. In the search box at the top of the portal, enter Network interface. Select Network
interfaces in the search results.
3. Select the network interface for the second domain controller that you want to
view or change settings for from the list.
5. Since this domain controller is not in the same virtual network as the primary
domain controller select Custom and input the IP address of the primary domain
controller, such as 192.168.15.4 . The DNS server address you specify is assigned
only to this network interface and overrides any DNS setting for the virtual network
the network interface is assigned to.
6. Select Save.
7. Return to the virtual machine in the Azure portal and restart the VM. Once the
virtual machine has restarted, you can join the VM to the domain.
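The portal steps above can be sketched with Az PowerShell as follows. The NIC and VM names are placeholders; the DNS address is the primary domain controller from this tutorial:

```powershell
# Point the second domain controller's NIC at the primary DC for DNS.
$nic = Get-AzNetworkInterface -Name "<second-dc-nic>" -ResourceGroupName "SQL-HA-RG"
$nic.DnsSettings.DnsServers.Clear()
$nic.DnsSettings.DnsServers.Add("192.168.15.4")
$nic | Set-AzNetworkInterface

# Restart the VM so it picks up the new DNS setting.
Restart-AzVM -Name "<second-dc-vm>" -ResourceGroupName "SQL-HA-RG"
```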
Join the domain
Next, join the corp.contoso.com domain. To do so, follow these steps:
Once your server has joined the domain, you can configure it as the second domain
controller. To do so, follow these steps:
1. If you're not already connected, open an RDP session to your secondary domain
controller, and open Server Manager Dashboard (which may be open by default).
5. After the features finish installing, return to the Server Manager dashboard.
8. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.
12. In Select a domain from the forest, choose your domain and then select OK.
13. In Domain Controller Options, use the default values and set a DSRM password.
Note
The DNS Options page might warn you that a delegation for this DNS server
can't be created. You can ignore this warning in non-production
environments.
14. Select Next until the dialog reaches the Prerequisites check. Then select Install.
After the server finishes the configuration changes, restart the server.
For the virtual machine storage, use Azure managed disks. We recommend
managed disks for SQL Server virtual machines. Managed disks handle storage
behind the scenes. In addition, when virtual machines with managed disks are in
the same availability set, Azure distributes the storage resources to provide
appropriate redundancy.
For more information, see Introduction to Azure managed disks. For specifics
about managed disks in an availability set, see Use managed disks for VMs in an
availability set.
For the virtual machines, this tutorial uses public IP addresses. A public IP address
enables remote connection directly to the virtual machine over the internet and
makes configuration steps easier. In production environments, we recommend
only private IP addresses. Private IP addresses reduce the vulnerability footprint of
the SQL Server VM.
Use a single network interface card (NIC) per server (cluster node) and a single
subnet. Azure networking has physical redundancy, which makes additional NICs
and subnets unnecessary on an Azure VM guest cluster. The cluster validation
report will warn you that the nodes are reachable on only a single network. You
can ignore this warning on Azure VM guest failover clusters.
Page Setting
Select the appropriate gallery item: SQL Server 2016 SP1 Enterprise on Windows Server 2016
Basics: User Name = DomainAdmin; Password = Contoso!0000
Settings: Virtual network = remote_HAVNET
SQL Server settings: SQL connectivity = Private (within Virtual Network); Port = 1433
The machine size suggested here is meant for testing availability groups in Azure
virtual machines. For the best performance on production workloads, see the
recommendations for SQL Server machine sizes and configuration in Checklist: Best
practices for SQL Server on Azure VMs.
After the VM is fully provisioned, you need to join it to the corp.contoso.com domain
and grant CORP\Install administrative rights to the machines.
Add accounts
The next task is to add the installation account as an administrator on the SQL Server
VM, and then grant permission to that account and to local accounts within SQL Server.
You can then update the SQL Server service account.
Tip
In earlier steps, you were using the BUILTIN administrator account. Now that
the server is in the domain, make sure that you sign in with the domain
administrator account. In your RDP session, specify DOMAIN\username.
3. In the Computer Management window, expand Local Users and Groups, and then
select Groups.
2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.
6. Select Locations.
7. Enter the domain administrator's network credentials. Use the installation account
(CORP\Install).
9. Select OK.
SQL

USE [master]
GO
CREATE LOGIN [CORP\Install] FROM WINDOWS WITH DEFAULT_DATABASE = [master]
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [CORP\Install]
GO

SQL

GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]
GO
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]
GO
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]
GO
For SQL Server availability groups, each SQL Server VM needs to run as a domain
account.
1. In the Azure portal, go to the resource group where your SQL Server instance is,
and then select + Add.
2. Search for Load Balancer. Choose the load balancer that Microsoft publishes.
3. Select Create.
Setting Value
Resource group Use the same resource group as the virtual machine.
Name Use a text name for the load balancer (for example, remoteLB).
9. Select Review + Create to validate the configuration, and then select Create to
create the load balancer and the frontend IP address.
To configure the load balancer, you need to create a backend pool, create a probe, and
set the load-balancing rules.
2. Select the load balancer, select Backend pools, and then select +Add.
5. Select Add to associate the backend pool with the newly created SQL Server VM.
6. Under Virtual machine, choose the virtual machine that will host the availability
group replica.
8. Select Save.
3. Select Add.
Setting Value Notes
Frontend IP address: Choose an address. Use the address that you created when you created the load balancer.
Backend pool: Choose the backend pool. Select the backend pool that contains the virtual machines targeted for the load balancer.
Warning
Direct server return is set during creation. You can't change it.
3. Select Save.
1. Connect to the SQL Server virtual machine through RDP by using the CORP\Install
account. Open the Server Manager dashboard.
6. Select Install.
Note
You can now automate this task, along with joining the SQL Server VMs to the
failover cluster, by using the Azure CLI and Azure quickstart templates.
SQL Server VM: Port 1433 for a default instance of SQL Server.
Azure load balancer probe: Any available port. Examples frequently use 59999.
Cluster core load balancer IP address health probe: Any available port. Examples
frequently use 58888.
Database mirroring endpoint: Any available port. Examples frequently use 5022.
The firewall ports need to be open on the new SQL Server VM. The method of opening
the ports depends on the firewall solution that you use. The following steps show how
to open the ports in Windows Firewall:
1. On the SQL Server Start screen, open Windows Firewall with Advanced Security.
2. On the left pane, select Inbound Rules. On the right pane, select New Rule.
4. For the port, specify TCP and enter the appropriate port numbers. The following
screenshot shows an example:
5. Select Next.
6. On the Action page, keep Allow the connection selected and select Next.
7. On the Profile page, accept the default settings and select Next.
8. On the Name page, specify a rule name (such as Azure LB Probe) in the Name
box, and then select Finish.
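Inside the VM, the same inbound rules can be created with PowerShell instead of the Windows Firewall UI. This is a sketch using the example ports from this article; adjust the ports if you chose different ones:

```powershell
# Open the ports used by SQL Server, the load balancer probes, and the mirroring endpoint.
New-NetFirewallRule -DisplayName "SQL Server" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
New-NetFirewallRule -DisplayName "Azure LB Probe" -Direction Inbound -Protocol TCP -LocalPort 59999 -Action Allow
New-NetFirewallRule -DisplayName "Cluster Core LB Probe" -Direction Inbound -Protocol TCP -LocalPort 58888 -Action Allow
New-NetFirewallRule -DisplayName "Database Mirroring Endpoint" -Direction Inbound -Protocol TCP -LocalPort 5022 -Action Allow
```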
Add SQL Server to the Windows Server failover
cluster
The new SQL Server VM needs to be added to the Windows Server failover cluster that
exists in your local region.
1. Use RDP to connect to a SQL Server VM in the existing cluster. Use a domain
account that's an administrator on both SQL Server VMs and the witness server.
2. On the Server Manager dashboard, select Tools, and then select Failover Cluster
Manager.
3. On the left pane, right-click Failover Cluster Manager, and then select Connect to
Cluster.
4. In the Select Cluster window, under Cluster name, choose <Cluster on this
server>. Then select OK.
5. In the browser tree, right-click the cluster and select Add Node.
7. On the Select Servers page, add the name of the new SQL Server instance. Enter
the server name in Enter server name, select Add, and then select Next.
8. On the Validation Warning page, select No. (In a production scenario, you should
perform the validation tests.) Then select Next.
9. On the Confirmation page, if you're using Storage Spaces, clear the Add all
eligible storage to the cluster checkbox.
Warning
If you don't clear Add all eligible storage to the cluster, Windows detaches
the virtual disks during the clustering process. As a result, they don't appear in
Disk Manager or Explorer until the storage is removed from the cluster and
reattached via PowerShell.
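Equivalently, the node can be added from PowerShell on an existing cluster member. The -NoStorage switch avoids the eligible-storage behavior described in the warning; the server name is a placeholder:

```powershell
Import-Module FailoverClusters

# Add the new SQL Server VM to the cluster without clustering its disks.
Add-ClusterNode -Name "<new-sql-server>" -NoStorage
```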
Note
On Windows Server 2019, the cluster creates a distributed network name instead of
a cluster network name. If you're using Windows Server 2019, skip to Add an IP
address for the availability group listener. You can create a cluster network name
by using PowerShell. For more information, review the blog post Failover Cluster:
Cluster Network Object.
Next, create the IP address resource and add it to the cluster for the new SQL Server VM:
1. In Failover Cluster Manager, select the name of the cluster. Right-click the cluster
name under Cluster Core Resources, and then select Properties:
2. In the Cluster Properties dialog, select Add under IP Addresses, and then add the
IP address of the cluster name from the remote network region. Select OK in the IP
Address dialog, and then select OK in the Cluster Properties dialog to save the
new IP address.
Open the Cluster Properties dialog once more, and select the Dependencies tab.
Configure an OR dependency for the two IP addresses.
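The OR dependency can also be set with PowerShell. This is a sketch; the IP resource names are placeholders for the two IP address resources in your cluster:

```powershell
Import-Module FailoverClusters

# Make the cluster name depend on either IP address (one per region).
Set-ClusterResourceDependency -Resource "Cluster Name" `
    -Dependency "[<local-ip-resource>] or [<remote-ip-resource>]"
```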
Add an IP address for the availability group listener
The IP address for the listener in the remote region needs to be added to the cluster. To
add the IP address:
1. In Failover Cluster Manager, right-click the availability group role. Point to Add
Resource, point to More Resources, and then select IP Address.
2. To configure this IP address, right-click the resource under Other Resources, and
then select Properties.
3. For Name, enter a name for the new resource. For Network, select the network
from the remote datacenter. Select Static IP Address, and then in the Address box,
assign the static IP address from the new Azure load balancer.
4. Select Apply, and then select OK.
5. Add the IP address resource as a dependency for the listener client access point
(network name) cluster.
Right-click the listener client access point, and then select Properties. Browse to
the Dependencies tab and add the new IP address resource to the listener client
access point. The following screenshot shows a properly configured IP address
cluster resource:
Important
The cluster resource group includes both IP addresses. Both IP addresses are
dependencies for the listener client access point. Use the OR operator in the
cluster dependency configuration.
Run the PowerShell script with the cluster network name, IP address, and probe
port that you configured on the load balancer in the new region:
PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>" # The cluster network name in the new region.
$IPResourceName = "<IPResourceName>" # The name of the new IP address cluster resource.
$ILBIP = "<n.n.n.n>" # The IP address of the load balancer in the new region.
[int]$ProbePort = <nnnnn> # The probe port that you set on the internal load balancer.

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple `
    @{"Address"="$ILBIP";"ProbePort"=$ProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}
2. In the browser tree, select SQL Server Services. Right-click the SQL Server
(MSSQLSERVER) service, and then select Properties.
3. Select the AlwaysOn High Availability tab, and then select Enable AlwaysOn
Availability Groups.
4. Select Apply. Select OK in the pop-up dialog.
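If the SqlServer PowerShell module is installed on the VM, the same change can be scripted. Note that enabling the feature restarts the SQL Server service:

```powershell
# Enable the Always On availability groups feature on this instance.
Enable-SqlAlwaysOn -ServerInstance $env:COMPUTERNAME -Force
```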
1. Open a remote desktop session to the primary SQL Server instance in the
availability group, and then open SQL Server Management Studio (SSMS).
4. Select Add Replica and connect to the new SQL Server VM.
Important
A replica in a remote Azure region should be set to asynchronous replication
with manual failover.
5. On the Select Initial Data Synchronization page, select Full and specify a shared
network location. For the location, use the backup share that you created. In the
example, it was \\<First SQL Server>\Backup\. Then select Next.
Note
Full synchronization takes a full backup of the database on the first instance
of SQL Server and restores it to the second instance. For large databases, we
don't recommend full synchronization because it might take a long time.
You can reduce this time by manually backing up the database and restoring
it with NO RECOVERY . If the database is already restored with NO RECOVERY on
the second SQL Server instance before you configure the availability group,
select Join only. If you want to take the backup after you configure the
availability group, select Skip initial data synchronization.
6. On the Validation page, select Next. This page should look similar to the following
image:
7. On the Summary page, select Finish, and then wait while the wizard configures the
new availability group. On the Progress page, you can select More details to view
the detailed progress.
After the wizard finishes the configuration, inspect the Results page to verify that
the availability group is successfully created.
The dashboard shows the replicas, the failover mode of each replica, and the
synchronization state.
2. Right-click the listener name and select Properties. Both IP addresses should now
appear for the listener (one in each region).
If you can't modify the connection strings, you can configure name resolution caching.
See Timeout occurs when you connect to an Always On listener in a multi-subnet
environment .
1. In Object Explorer, connect to the instance of SQL Server that hosts the primary
replica.
2. Under Always On Availability Groups, right-click your availability group and select
Properties.
3. On the General page, under Availability Replicas, set the secondary replica on the
disaster recovery (DR) site to use Synchronous Commit availability mode and
Automatic failover mode.
If you have a secondary replica in the same site as your primary replica for high
availability, set this replica to Asynchronous Commit and Manual.
4. Select OK.
5. In Object Explorer, right-click the availability group and select Show Dashboard.
7. In Object Explorer, right-click the availability group and select Failover. SQL Server
Management Studio opens a wizard to fail over SQL Server.
8. Select Next, and select the SQL Server instance on the DR site. Select Next again.
9. Connect to the SQL Server instance on the DR site, and then select Next.
10. On the Summary page, verify the settings and select Finish.
After you test connectivity, move the primary replica back to your primary datacenter
and set the availability mode back to its normal operating settings. The following table
shows the normal operating settings for the architecture described in this article:
Next steps
To learn more, see:
Applies to:
SQL Server on Azure VM
This article explains the steps necessary to create an Active Directory domain-
independent cluster (also known as a workgroup cluster) with an Always On availability
group. This article focuses on the steps that are relevant to preparing and
configuring the workgroup cluster and availability group, and glosses over steps that
are covered in other articles, such as how to create the cluster or deploy the
availability group.
Prerequisites
To configure a workgroup availability group, you need the following:
At least two Windows Server 2016 (or later) virtual machines running SQL Server
2016 (or later), deployed to the same availability set or to different availability
zones, and using static IP addresses.
A local network with a minimum of 4 free IP addresses on the subnet.
An account on each machine in the administrator group that also has sysadmin
rights within SQL Server.
Open ports: TCP 1433, TCP 5022, TCP 59999.
For reference, the following parameters are used in this article, but you can modify
them as necessary:
Name Parameter
2. Select Local Server and then select the name of your virtual machine under
Computer name.
5. Select More... to open the DNS Suffix and NetBIOS Computer Name dialog box.
6. Type the name of your DNS suffix under Primary DNS suffix of this computer,
such as ag.wgcluster.example.com and then select OK:
7. Confirm that the Full computer name is now showing the DNS suffix, and then
select OK to save your changes:
9. Repeat these steps on any other nodes to be used for the availability group.
3. Right-click the hosts file and open the file with Notepad (or any other text editor).
4. At the end of the file, add an entry for each node, the availability group, and the
listener in the form of IP Address, DNS Suffix #comment like:
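For example, the entries might look like the following. The IP addresses, availability group name, and listener name here are illustrative placeholders; use free addresses from your own subnet and your own names:

```text
10.0.100.4 AGNode1.ag.wgcluster.example.com #first cluster node
10.0.100.5 AGNode2.ag.wgcluster.example.com #second cluster node
10.0.100.6 AG1.ag.wgcluster.example.com #availability group
10.0.100.7 AGListener.ag.wgcluster.example.com #listener
```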
Set permissions
Since there is no Active Directory to manage permissions, you need to manually allow a
non-builtin local administrator account to create the cluster.
PowerShell
new-itemproperty -path
HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name
LocalAccountTokenFilterPolicy -Value 1
Notable differences between the tutorial and what should be done for a workgroup
cluster:
Uncheck Storage and Storage Spaces Direct when running the cluster validation.
When adding the nodes to the cluster, add the fully qualified name, such as:
AGNode1.ag.wgcluster.example.com
AGNode2.ag.wgcluster.example.com
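As a sketch, the validation and cluster creation can be run from PowerShell on one of the nodes. The cluster name and static address are placeholders:

```powershell
Import-Module FailoverClusters

# Validate the nodes, skipping the storage checks as described above.
Test-Cluster -Node "AGNode1.ag.wgcluster.example.com","AGNode2.ag.wgcluster.example.com" `
    -Ignore "Storage","Storage Spaces Direct"

# Create the workgroup cluster with a DNS administrative access point.
New-Cluster -Name "<cluster-name>" `
    -Node "AGNode1.ag.wgcluster.example.com","AGNode2.ag.wgcluster.example.com" `
    -AdministrativeAccessPoint DNS -StaticAddress "<n.n.n.n>"
```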
Once the cluster has been created, assign a static Cluster IP address. To do so, follow
these steps:
1. On one of the nodes, open Failover Cluster Manager, select the cluster, right-click
Name: <ClusterName> under Cluster Core Resources, and then select
Properties.
4. Verify that your settings look correct, and then select OK to save them.
Create a cloud witness
In this step, configure a cloud witness. If you're unfamiliar with the steps, see
Deploy a Cloud Witness for a Failover Cluster.
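Once you have an Azure storage account, configuring the witness comes down to one PowerShell command. The storage account name and key are placeholders:

```powershell
Import-Module FailoverClusters

# Use an Azure storage account as the cluster quorum witness.
Set-ClusterQuorum -CloudWitness -AccountName "<storage-account-name>" -AccessKey "<storage-account-key>"
```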
1. Open SQL Server Management Studio and connect to your first node, such as
AGNode1 .
2. Open a New Query window and run the following Transact-SQL (T-SQL) statement
after updating to a complex and secure password:
SQL
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<ComplexSecurePassword>';
GO
CREATE CERTIFICATE AGNode1Cert
    WITH SUBJECT = 'AGNode1 Certificate';
GO
BACKUP CERTIFICATE AGNode1Cert
    TO FILE = 'c:\certs\AGNode1Cert.crt';
GO
3. Next, create the HADR endpoint, and use the certificate for authentication by
running this Transact-SQL (T-SQL) statement:
SQL
CREATE ENDPOINT hadr_endpoint
    STATE = STARTED
    AS TCP (
        LISTENER_PORT = 5022
        , LISTENER_IP = ALL
    )
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = CERTIFICATE AGNode1Cert
        , ROLE = ALL
    );
GO
4. Use File Explorer to go to the file location where your certificate is, such as
c:\certs .
5. Manually make a copy of the certificate, such as AGNode1Cert.crt , from the first
node, and transfer it to the same location on the second node.
1. Connect to the second node with SQL Server Management Studio, such as
AGNode2 .
2. In a New Query window, run the following Transact-SQL (T-SQL) statement after
updating to a complex and secure password:
SQL
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<ComplexSecurePassword>';
GO
CREATE CERTIFICATE AGNode2Cert
    WITH SUBJECT = 'AGNode2 Certificate';
GO
BACKUP CERTIFICATE AGNode2Cert
    TO FILE = 'c:\certs\AGNode2Cert.crt';
GO
3. Next, create the HADR endpoint, and use the certificate for authentication by
running this Transact-SQL (T-SQL) statement:
SQL
CREATE ENDPOINT hadr_endpoint
    STATE = STARTED
    AS TCP (
        LISTENER_PORT = 5022
        , LISTENER_IP = ALL
    )
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = CERTIFICATE AGNode2Cert
        , ROLE = ALL
    );
GO
4. Use File Explorer to go to the file location where your certificate is, such as
c:\certs .
5. Manually make a copy of the certificate, such as AGNode2Cert.crt , from the second
node, and transfer it to the same location on the first node.
If there are any other nodes in the cluster, repeat these steps there also, modifying the
respective certificate names.
Create logins
Certificate authentication is used to synchronize data across nodes. To allow this, create
a login for the other node, create a user for the login, create a certificate for the login to
use the backed-up certificate, and then grant connect on the mirroring endpoint.
To do so, first run the following Transact-SQL (T-SQL) query on the first node, such as
AGNode1 :
SQL
USE master;
GO
CREATE LOGIN AGNode2_Login WITH PASSWORD = '<ComplexSecurePassword>';
GO
CREATE USER AGNode2_User FOR LOGIN AGNode2_Login;
GO
CREATE CERTIFICATE AGNode2Cert
    AUTHORIZATION AGNode2_User
    FROM FILE = 'c:\certs\AGNode2Cert.crt';
GO
GRANT CONNECT ON ENDPOINT::hadr_endpoint TO AGNode2_Login;
GO
Next, run the following Transact-SQL (T-SQL) query on the second node, such as
AGNode2 :
SQL
USE master;
GO
CREATE LOGIN AGNode1_Login WITH PASSWORD = '<ComplexSecurePassword>';
GO
CREATE USER AGNode1_User FOR LOGIN AGNode1_Login;
GO
CREATE CERTIFICATE AGNode1Cert
    AUTHORIZATION AGNode1_User
    FROM FILE = 'c:\certs\AGNode1Cert.crt';
GO
GRANT CONNECT ON ENDPOINT::hadr_endpoint TO AGNode1_Login;
GO
If there are any other nodes in the cluster, repeat these steps there also, modifying the
respective certificate and user names.
Note
If there is a failure during the synchronization process, you may need to temporarily
grant sysadmin rights to NT AUTHORITY\SYSTEM so that cluster resources can be
created on the first node, such as AGNode1.
There may be some limitations when you use the Windows Cluster GUI. As such, use
PowerShell to create the client access point (the network name for your listener), as in
the following example script. The resource names and IP address are placeholders;
update them for your environment:
PowerShell

$ClusterGroup = "<AvailabilityGroupRoleName>" # The cluster role that hosts the availability group.
$IPResourceName = "<IPResourceName>" # The name for the new IP address resource.
$ListenerName = "<ListenerName>" # The DNS name for the listener.
$ListenerIP = "<n.n.n.n>" # The static IP address for the listener.

Add-ClusterResource -Name $IPResourceName -ResourceType "IP Address" -Group $ClusterGroup
Get-ClusterResource -Name $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerIP";"SubnetMask"="255.255.255.0";"EnableDhcp"=0}
Add-ClusterResource -Name $ListenerName -ResourceType "Network Name" -Group $ClusterGroup
Get-ClusterResource -Name $ListenerName | Set-ClusterParameter -Multiple @{"DnsName"="$ListenerName";"RegisterAllProvidersIP"=1}
Set-ClusterResourceDependency -Resource $ListenerName -Dependency "[$IPResourceName]"
Start-ClusterResource -Name $ListenerName
Next steps
Once the availability group is deployed, consider optimizing the HADR settings for SQL
Server on Azure VMs.
Applies to:
SQL Server on Azure VM
This tutorial shows how to complete the prerequisites for creating a SQL Server Always
On availability group on Azure virtual machines within a single subnet. When you've
completed the prerequisites, you'll have a domain controller, two SQL Server VMs, and a
witness server in a single resource group.
This article manually configures the availability group environment. It's also possible to
automate the steps by using the Azure portal, PowerShell, the Azure CLI, or Azure
quickstart templates.
Time estimate: It might take a couple of hours to complete the prerequisites. You'll
spend much of this time creating virtual machines.

Tip
It's now possible to lift and shift your availability group solution to SQL Server on
Azure VMs by using Azure Migrate. To learn more, see Migrate an availability
group.
4. On the Create a resource group page, fill out the values to create the resource
group:
a. Choose the appropriate Azure subscription from the dropdown list.
b. Provide a name for your resource group, such as SQL-HA-RG.
c. Choose a region from the dropdown list, such as West US 2. Be sure to deploy
all subsequent resources to this location.
d. Select Review + create to review your resource parameters, and then select
Create to create your resource group.
The solution in this tutorial uses one virtual network and one subnet. The virtual network
overview provides more information about networks in Azure.
To create the virtual network in the Azure portal, follow these steps:
2. Search for virtual network in the Marketplace search box, and then choose the
Virtual network tile from Microsoft. Select Create.
3. On the Create virtual network page, enter the following information on the Basics
tab:
a. Under Project details, for Subscription, choose the appropriate Azure
subscription. For Resource group, select the resource group that you created
previously, such as SQL-HA-RG.
b. Under Instance details, provide a name for your virtual network, such as
autoHAVNET. In the dropdown list, choose the same region that you chose for
your resource group.
4. On the IP addresses tab, select the ellipsis (...) next to + Add a subnet. Select
Delete address space to remove the existing address space, if you need a different
address range.
5. Select Add an IP address space to open the pane to create the address space that
you need. This tutorial uses the address space of 192.168.0.0/16 (192.168.0.0 for
Starting address and /16 for Address space size). Select Add to create the address
space.
b. Provide a unique subnet address range within the virtual network address
space.
Azure returns you to the portal dashboard and notifies you when the new network is
created.
An Azure availability set is a logical group of resources that Azure places on these
physical domains:
Fault domain: Ensures that the members of the availability set have separate
power and network resources.
Update domain: Ensures that members of the availability set aren't brought down
for maintenance at the same time.
You need two availability sets. One is for the domain controllers. The second is for the
SQL Server VMs.
Configure two availability sets according to the parameters in the following table:

Setting Domain controller availability set SQL Server availability set
Fault domains 3 3
Update domains 5 3
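A sketch of creating the two availability sets with Az PowerShell, using the fault and update domain counts from the table. The set names and region are placeholders; -Sku Aligned is needed for VMs with managed disks:

```powershell
# Availability set for the domain controllers.
New-AzAvailabilitySet -ResourceGroupName "SQL-HA-RG" -Name "<dc-availability-set>" `
    -Location "<region>" -Sku Aligned -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 5

# Availability set for the SQL Server VMs.
New-AzAvailabilitySet -ResourceGroupName "SQL-HA-RG" -Name "<sql-availability-set>" `
    -Location "<region>" -Sku Aligned -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 3
```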
After you create the availability sets, return to the resource group in the Azure portal.
The following table shows the settings for these two machines:
Field Value
Password Contoso!0000
Size DS1_V2
Subnet admin
Fault domains 3
Update domains 5
Diagnostics Enabled
Important
You can place a VM in an availability set only when you create it. You can't change
the availability set after a VM is created. See Manage the availability of virtual
machines.
1. In the portal, open the SQL-HA-RG resource group and select the ad-primary-dc
machine. On ad-primary-dc, select Connect to open a Remote Desktop Protocol
(RDP) file for remote desktop access.
3. By default, the Server Manager dashboard should be displayed. Select the Add
roles and features link on the dashboard.
5. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any features that these roles require.
Note
Windows warns you that there is no static IP address. If you're testing the
configuration, select Continue. For production scenarios, set the IP address to
static in the Azure portal, or use PowerShell to set the static IP address of the
domain controller machine.
6. Select Next until you reach the Confirmation section. Select the Restart the
destination server automatically if required checkbox.
7. Select Install.
8. After installation of the features finishes, return to the Server Manager dashboard.
11. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.
12. In the Active Directory Domain Services Configuration Wizard, use the following
values:
Page Setting
13. Select Next to go through the other pages in the wizard. On the Prerequisites
Check page, verify that the following message appears: "All prerequisite checks
passed successfully." You can review any applicable warning messages, but it's
possible to continue with the installation.
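The role installation and promotion above can also be scripted. This is a minimal sketch for a new corp.contoso.com forest, run on the VM itself; you're prompted for the DSRM (safe mode) password, and the server restarts when promotion finishes:

```powershell
# Install the AD DS and DNS roles, then promote this server to the first domain controller.
Install-WindowsFeature -Name AD-Domain-Services, DNS -IncludeManagementTools
Install-ADDSForest -DomainName "corp.contoso.com" -DomainNetbiosName "CORP"
```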
One way to get the primary domain controller's IP address is through the Azure portal:
3. Select Custom, and enter the private IP address of the primary domain controller.
4. Select Save.
Don't update the preferred DNS server address directly within the VM. Instead, edit it
from the Azure portal, PowerShell, or the Azure CLI. The following steps make the
change in the Azure portal:
2. In the search box at the top of the portal, enter Network interface. Select Network
interfaces in the search results.
3. Select the network interface for the second domain controller that you want to
view or change settings for from the list.
5. Select either:
Inherit from virtual network: Choose this option to inherit the DNS server
setting defined for the virtual network the network interface is assigned to.
This would automatically inherit the primary domain controller as the DNS
server.
Custom: You can configure your own DNS server to resolve names across
multiple virtual networks. Enter the IP address of the server you want to use
as a DNS server. The DNS server address you specify is assigned only to this
network interface and overrides any DNS setting for the virtual network the
network interface is assigned to. If you select custom, then input the IP
address of the primary domain controller, such as 192.168.15.4 .
6. Select Save. If using a Custom DNS Server, return to the virtual machine in the
Azure portal and restart the VM. Once the virtual machine has restarted, you can
join the VM to the domain.
Once your server has joined the domain, you can configure it as the second domain
controller. To do so, follow these steps:
1. If you're not already connected, open an RDP session to your secondary domain
controller, and open Server Manager Dashboard (which may be open by default).
4. Select the Active Directory Domain Services and DNS Server roles. When you're
prompted, add any additional features that are required by these roles.
5. After the features finish installing, return to the Server Manager dashboard.
8. In the Action column of the All Server Task Details dialog, select Promote this
server to a domain controller.
12. In Select a domain from the forest, choose your domain and then select OK.
13. In Domain Controller Options, use the default values and set a DSRM password.
Note
The DNS Options page might warn you that a delegation for this DNS server
can't be created. You can ignore this warning in non-production
environments.
14. Select Next until the dialog reaches the Prerequisites check. Then select Install.
After the server finishes the configuration changes, restart the server.
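The role installation and promotion in the steps above can also be done from an elevated PowerShell session on the second VM. This is a sketch assuming the corp.contoso.com domain used in this tutorial; you're prompted for domain credentials and the DSRM password:

```powershell
# Install the AD DS and DNS server roles with management tools.
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools

# Promote this server as an additional domain controller for the existing domain.
# You're prompted for the Safe Mode (DSRM) administrator password.
Install-ADDSDomainController `
  -DomainName "corp.contoso.com" `
  -Credential (Get-Credential "CORP\DomainAdmin") `
  -InstallDns
```

The cmdlet restarts the server automatically when the promotion completes.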
| Account name | VM | Full name | Description |
|---|---|---|---|
| Install | Both | Corp\Install | Log in to either VM with this account to configure the cluster and availability group. |
| SQLSvc | Both (sqlserver-0 and sqlserver-1) | Corp\SQLSvc | Use this account for the SQL Server service and SQL Server Agent service account on both SQL Server VMs. |
2. In Server Manager, select Tools, and then select Active Directory Administrative
Center.
2. Select Extensions, and then select the Advanced button on the Security tab.
7. Select OK, and then select OK again. Close the corp properties window.
Now that you've finished configuring Active Directory and the user objects, you can
create additional VMs that you'll join to the domain.
For the virtual machine storage, use Azure managed disks. We recommend
managed disks for SQL Server virtual machines. Managed disks handle storage
behind the scenes. In addition, when virtual machines with managed disks are in
the same availability set, Azure distributes the storage resources to provide
appropriate redundancy.
For more information, see Introduction to Azure managed disks. For specifics
about managed disks in an availability set, see Availability options for Azure virtual
machines.
For the virtual machines, this tutorial uses public IP addresses. A public IP address
enables remote connection directly to a virtual machine over the internet and
makes configuration steps easier. In production environments, we recommend
only private IP addresses to reduce the vulnerability footprint of the SQL Server
instance's VM resource.
Use a single network interface card (NIC) per server (cluster node) and a single
subnet. Azure networking has physical redundancy, which makes additional NICs
and subnets unnecessary on an Azure VM guest cluster.
The cluster validation report will warn you that the nodes are reachable only on a
single network. You can ignore this warning on Azure VM guest failover clusters.
2. Search for the appropriate gallery item, select Virtual Machine, and then select
From Gallery.
3. Use the information in the following table to finish creating the three VMs:
| Page | First VM | Second VM | Third VM |
|---|---|---|---|
| Select the appropriate gallery item | Windows Server 2016 Datacenter | SQL Server 2016 SP1 Enterprise on Windows Server 2016 | SQL Server 2016 SP1 Enterprise on Windows Server 2016 |
| Virtual machine configuration: Basics | User Name = DomainAdmin | User Name = DomainAdmin | User Name = DomainAdmin |
| Virtual machine configuration: Settings | Virtual network = autoHAVNET | Virtual network = autoHAVNET | Virtual network = autoHAVNET |
| SQL Server settings | (not applicable) | Port = 1433 | Port = 1433 |
Note
The machine sizes suggested here are meant for testing availability groups in Azure
virtual machines. For the best performance on production workloads, see the
recommendations for SQL Server machine sizes and configuration in Performance
best practices for SQL Server in Azure virtual machines.
After the three VMs are fully provisioned, you need to join them to the
corp.contoso.com domain and grant CORP\Install administrative rights to the
machines.
Add accounts
Add the installation account as an administrator on each VM, grant permission to the
installation account and local accounts within SQL Server, and update the SQL Server
service account.
1. Wait until the VM is restarted, and then open the RDP file again from the primary
domain controller. Sign in to sqlserver-0 by using the CORP\DomainAdmin
account.
3. In the Computer Management window, expand Local Users and Groups, and then
select Groups.
The following steps create a sign-in for the installation account. Complete them on both
SQL Server VMs.
2. Open SQL Server Management Studio and connect to the local instance of SQL
Server.
6. Select Locations.
7. Enter the network credentials for the domain administrator. Use the installation
account (CORP\install).
9. Select OK.
Next, grant permissions to the system account. Create a login for [NT AUTHORITY\SYSTEM]
on each SQL Server VM:

SQL

USE [master]
GO
CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GO

Then grant the following permissions to [NT AUTHORITY\SYSTEM] on each SQL Server VM:

SQL

GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]
GO
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]
GO
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]
GO
For SQL Server availability groups, each SQL Server VM needs to run as a domain
account.
1. Connect to the SQL Server virtual machine through RDP by using the CORP\install
account. Open the Server Manager dashboard.
6. Select Install.
Note
You can now automate this task, along with joining the SQL Server VMs to the
failover cluster, by using the Azure CLI and Azure quickstart templates.
SQL Server VM: Port 1433 for a default instance of SQL Server.
Azure load balancer probe: Any available port. Examples frequently use 59999.
Load balancer IP address health probe for cluster core: Any available port.
Examples frequently use 58888.
Database mirroring endpoint: Any available port. Examples frequently use 5022.
The firewall ports need to be open on both SQL Server VMs. The method of opening the
ports depends on the firewall solution that you use. The following steps show how to
open the ports in Windows Firewall:
1. On the first SQL Server Start screen, open Windows Firewall with Advanced
Security.
2. On the left pane, select Inbound Rules. On the right pane, select New Rule.
4. For the port, specify TCP and enter the appropriate port numbers. The following
screenshot shows an example:
5. Select Next.
6. On the Action page, keep Allow the connection selected, and then select Next.
7. On the Profile page, accept the default settings, and then select Next.
8. On the Name page, specify a rule name (such as Azure LB Probe) in the Name
box, and then select Finish.
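As an alternative to the wizard, the same inbound rules can be created with PowerShell on each SQL Server VM. The rule names below are hypothetical; the port numbers match the examples used in this article (1433, 59999, 58888, and 5022):

```powershell
# Default SQL Server instance
New-NetFirewallRule -DisplayName "SQL Server" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
# Availability group listener health probe
New-NetFirewallRule -DisplayName "AG listener probe" -Direction Inbound -Protocol TCP -LocalPort 59999 -Action Allow
# Cluster core IP address health probe
New-NetFirewallRule -DisplayName "Cluster core probe" -Direction Inbound -Protocol TCP -LocalPort 58888 -Action Allow
# Database mirroring endpoint
New-NetFirewallRule -DisplayName "Database mirroring endpoint" -Direction Inbound -Protocol TCP -LocalPort 5022 -Action Allow
```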
Next steps
Now that you've configured the prerequisites, get started with configuring your
availability group.
Applies to:
SQL Server on Azure VM
This tutorial shows how to create an Always On availability group for SQL Server on
Azure VMs within a single subnet. The complete tutorial creates an availability group
with a database replica on two SQL Server instances.
This article manually configures the availability group environment. It's also possible to
automate the steps by using the Azure portal, PowerShell, the Azure CLI, or Azure
Quickstart Templates.
Time estimate: This tutorial takes about 30 minutes to complete after you meet the
prerequisites.
Prerequisites
The tutorial assumes that you have a basic understanding of SQL Server Always On
availability groups. If you need more information, see Overview of Always On availability
groups (SQL Server).
Before you begin the procedures in this tutorial, you need to complete prerequisites for
creating Always On availability groups in Azure virtual machines. If you completed these
prerequisites already, you can jump to Create the cluster.
The following table summarizes the prerequisites that you need before you can
complete this tutorial:
| Requirement | Description |
|---|---|
| Two SQL Server VMs | In an Azure availability set |
| Windows Server | File share for a cluster witness |
| SQL Server service account | Domain account |
| SQL Server Agent service account | Domain account |
| Firewall ports | SQL Server: 1433 for a default instance; load balancer IP address health probe for an availability group: 59999 or any available port; load balancer IP address health probe for cluster core: 58888 or any available port |
| Failover clustering | Required for both SQL Server instances |
| Installation domain account | Local administrator on each SQL Server instance; member of the sysadmin fixed server role for each SQL Server instance |
| Network security groups (NSGs) | If the environment uses network security groups, ensure that the current configuration allows network traffic through the ports described in Configure the firewall. |
1. Use Remote Desktop Protocol (RDP) to connect to the first SQL Server VM. Use a
domain account that's an administrator on both SQL Server VMs and the witness
server.
3. On the left pane, right-click Failover Cluster Manager, and then select Create
Cluster.
4. In the Create Cluster Wizard, create a one-node cluster by stepping through the
pages with the settings in the following table:
| Page | Setting |
|---|---|
| Select Servers | Enter the first SQL Server VM name in Enter server name, and then select Add. |
| Validation Warning | Select No. I do not require support from Microsoft for this cluster, and therefore do not want to run the validation tests. When I select Next, continue creating the cluster. |
| Access Point for Administering the Cluster | In Cluster Name, enter a cluster name (for example, SQLAGCluster1). |
Note
On Windows Server 2019, the cluster creates a Distributed Network Name value
instead of the Cluster Network Name value. If you're using Windows Server 2019,
skip any steps that refer to the cluster core name in this tutorial. You can create a
cluster network name by using PowerShell. For more information, review the blog
post Failover Cluster: Cluster Network Object .
1. In Failover Cluster Manager, scroll down to Cluster Core Resources and expand
the cluster details. Both the Name and IP Address resources should be in the
Failed state.
The IP address resource can't be brought online because the cluster is assigned the
same IP address as the machine itself. It's a duplicate address.
3. Select Static IP Address. Specify an available address from the same subnet as
your virtual machines.
4. In the Cluster Core Resources section, right-click the cluster name and select Bring
Online. Wait until both resources are online.
When the cluster name resource comes online, it updates the domain controller
server with a new Active Directory computer account. Use this Active Directory
account to run the availability group's clustered service later.
3. On the Select Servers page, add the second SQL Server VM. Enter the VM name in
Enter server name, and then select Add > Next.
4. On the Validation Warning page, select No. (In a production scenario, you should
perform the validation tests.) Then, select Next.
5. On the Confirmation page, if you're using Storage Spaces, clear the Add all
eligible storage to the cluster checkbox.
Warning
If you don't clear Add all eligible storage to the cluster, Windows detaches
the virtual disks during the clustering process. As a result, they don't appear in
Disk Manager or Object Explorer until the storage is removed from the
cluster and reattached via PowerShell.
6. Select Next.
7. Select Finish.
Failover Cluster Manager shows that your cluster has a new node and lists it in the
Nodes container.
1. Connect to the file share witness server VM by using a remote desktop session.
5. On the Folder Path page, select Browse. Locate or create a path for the shared
folder, and then select Next.
6. On the Name, Description, and Settings page, verify the share name and path.
Select Next.
9. Make sure that the account that's used to create the cluster has full control.
10. Select OK.
11. On the Shared Folder Permissions page, select Finish. Then select Finish again.
2. In Failover Cluster Manager, right-click the cluster, point to More Actions, and
then select Configure Cluster Quorum Settings.
3. In the Configure Cluster Quorum Wizard, select Next.
4. On the Select Quorum Configuration Option page, choose Select the quorum
witness, and then select Next.
5. On the Select Quorum Witness page, select Configure a file share witness.
Tip
Windows Server 2016 supports a cloud witness. If you choose this type of
witness, you don't need a file share witness. For more information, see Deploy
a cloud witness for a failover cluster. This tutorial uses a file share witness,
which previous operating systems support.
6. In Configure File Share Witness, enter the path for the share that you created.
Then select Next.
8. Select Finish.
The cluster core resources are configured with a file share witness.
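The same quorum change can be made with PowerShell from either cluster node. This is a sketch; the share path is a placeholder for the share you created on the witness VM:

```powershell
# Configure the cluster quorum to use a file share witness.
# "\\fileshare-vm\quorum" is a placeholder path; substitute your own share.
Set-ClusterQuorum -FileShareWitness "\\fileshare-vm\quorum"
```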
2. In the browser tree, select SQL Server Services. Then right-click the SQL Server
(MSSQLSERVER) service and select Properties.
3. Select the Always On High Availability tab, and then select Enable Always On
availability groups.
4. On the Folder Path page, select Browse. Locate or create a path for the database
backup's shared folder, and then select Next.
5. On the Name, Description, and Settings page, verify the share name and path.
Then select Next.
6. On the Shared Folder Permissions page, set Customize permissions. Then select
Custom.
8. Make sure that the accounts for the SQL Server and SQL Server Agent service on
both servers have full control.
9. Select OK.
10. On the Shared Folder Permissions page, select Finish. Select Finish again.
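Enabling Always On availability groups (step 3 above) can also be scripted per VM. This is a sketch using the Enable-SqlAlwaysOn cmdlet, assuming the SqlServer PowerShell module is installed and the default-instance server names from this tutorial:

```powershell
# Enable the Always On feature on each instance; -Force restarts the SQL Server service.
Enable-SqlAlwaysOn -ServerInstance "sqlserver-0" -Force
Enable-SqlAlwaysOn -ServerInstance "sqlserver-1" -Force
```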
1. In Object Explorer, right-click the database, point to Tasks, and then select Back
Up.
2. In Object Explorer in SSMS, right-click Always On High Availability and select New
Availability Group Wizard.
3. On the Introduction page, select Next. On the Specify Availability Group Options
page, enter a name for the availability group in the Availability group name box.
For example, enter MyTestAG. Then select Next.
4. On the Select Databases page, select your database, and then select Next.
Note
The database meets the prerequisites for an availability group because you've
taken at least one full backup on the intended primary replica.
Back on the Specify Replicas page, you should now see the second server listed
under Availability Replicas. Configure the replicas as follows.
7. Select Endpoints to see the database mirroring endpoint for this availability group.
Use the same port that you used when you set the firewall rule for database
mirroring endpoints.
8. On the Select Initial Data Synchronization page, select Full and specify a shared
network location. For the location, use the backup share that you created. In the
example, it was \\<First SQL Server Instance>\Backup\. Select Next.
Note
Full synchronization takes a full backup of the database on the first instance
of SQL Server and restores it to the second instance. For large databases, we
don't recommend full synchronization because it might take a long time.
You can reduce this time by manually taking a backup of the database and
restoring it with NO RECOVERY . If the database is already restored with NO
RECOVERY on the second SQL Server instance before you configure the
availability group, select Join only. If you want to take the backup after
configuring the availability group, select Skip initial data synchronization.
9. On the Validation page, select Next. This page should look similar to the following
image:
10. On the Summary page, select Finish, and then wait while the wizard configures the
new availability group. On the Progress page, you can select More details to view
the detailed progress.
After the wizard finishes the configuration, inspect the Results page to verify that
the availability group is successfully created.
11. Select Close to close the wizard.
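The manual-seeding alternative described in the note at step 8 (back up on the primary, restore WITH NORECOVERY on the secondary, then choose Join only) can be sketched in T-SQL, run here through Invoke-Sqlcmd from the SqlServer module. The database name, share path, and server names are placeholders:

```powershell
# On the primary replica: take a full backup to the shared location.
Invoke-Sqlcmd -ServerInstance "sqlserver-0" -Query "BACKUP DATABASE [MyDB] TO DISK = N'\\sqlserver-0\Backup\MyDB.bak' WITH FORMAT;"

# On the secondary replica: restore WITH NORECOVERY so the database
# can later join the availability group without reinitializing.
Invoke-Sqlcmd -ServerInstance "sqlserver-1" -Query "RESTORE DATABASE [MyDB] FROM DISK = N'\\sqlserver-0\Backup\MyDB.bak' WITH NORECOVERY;"
```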
The dashboard shows the replicas, the failover mode of each replica, and the
synchronization state.
The availability group name that you used is a role on the cluster. That availability
group doesn't have an IP address for client connections because you didn't
configure a listener. You'll configure the listener after you create an Azure load
balancer.
Warning
Don't try to fail over the availability group from Failover Cluster Manager. All
failover operations should be performed on the availability group dashboard
in SSMS. Learn more about restrictions on using Failover Cluster Manager
with availability groups.
At this point, you have an availability group with two SQL Server replicas. You can move
the availability group between instances. You can't connect to the availability group yet
because you don't have a listener.
In Azure virtual machines, the listener requires a load balancer. The next step is to create
the load balancer in Azure.
Note
On Azure virtual machines in a single subnet, a SQL Server availability group requires a
load balancer. The load balancer holds the IP addresses for the availability group
listeners and the Windows Server failover cluster. This section summarizes how to create
the load balancer in the Azure portal.
A load balancer in Azure can be either standard or basic. A standard load balancer has
more features than the basic load balancer. For an availability group, the standard load
balancer is required if you use an availability zone (instead of an availability set). For
details on the difference between the SKUs, see Azure Load Balancer SKUs.
Important
On September 30, 2025, the Basic SKU for Azure Load Balancer will be retired. For
more information, see the official announcement. If you're currently using Basic
Load Balancer, upgrade to Standard Load Balancer before the retirement date. For
guidance, review Upgrade Load Balancer.
1. In the Azure portal, go to the resource group that contains your SQL Server VMs
and select + Add.
2. Search for load balancer. Choose the load balancer that Microsoft publishes.
3. Select Create.
4. On the Create load balancer page, configure the following parameters for the load
balancer:
| Setting | Value |
|---|---|
| Resource group | Use the same resource group as the virtual machine. |
| Name | Use a text name for the load balancer, such as sqlLB. |
9. Choose Review + Create to validate the configuration. Then select Create to create
the load balancer and the frontend IP address.
To configure the load balancer, you need to create a backend pool, create a probe, and
set the load-balancing rules.
2. Select the load balancer, select Backend pools, and then select +Add.
5. Select Add to associate the backend pool with the availability set that contains the
VMs.
6. Under Virtual machine, choose the virtual machines that will host availability
group replicas. Don't include the file share witness server.
Note
If both virtual machines are not specified, only connections to the primary
replica will succeed.
3. Select Add.
| Setting | Value |
|---|---|
| Frontend IP address | Choose the address that you created when you created the load balancer. |
| Backend pool | Choose the backend pool that contains the virtual machines targeted for the load balancer. |
Direct server return is set during creation. You can't change it.
3. Select Save.
1. In the Azure portal, go to the same Azure load balancer. Select Frontend IP
configuration, and then select +Add. Use the IP address that you configured for
the Windows Server failover cluster in the cluster core resources. Set the IP address
as Static.
2. On the load balancer, select Health probes, and then select +Add.
3. Set the cluster core IP address health probe for the Windows Server failover cluster
as follows:
6. Set the load-balancing rules for the cluster core IP address as follows:
| Setting | Value |
|---|---|
| Frontend IP address | Choose the address that you created when you configured the IP address for the Windows Server failover cluster. This is different from the listener IP address. |
| Backend pool | Choose the backend pool that contains the virtual machines targeted for the load balancer. |
Warning
Direct server return is set during creation. You can't change it.
7. Select OK.
Note
This tutorial shows how to create a single listener, with one IP address for the
internal load balancer. To create listeners by using one or more IP addresses, see
Configure one or more Always On availability group listeners.
The availability group listener is an IP address and network name that the SQL Server
availability group listens on. To create the availability group listener:
a. Use RDP to connect to the Azure virtual machine that hosts the primary replica.
c. Select the Networks node, and note the cluster network name. Use this name in
the $ClusterNetworkName variable in the PowerShell script. In the following image,
the cluster network name is Cluster Network 1:
2. Add the client access point. The client access point is the network name that
applications use to connect to the databases in an availability group.
a. In Failover Cluster Manager, expand the cluster name, and then select Roles.
b. On the Roles pane, right-click the availability group name, and then select Add
Resource > Client Access Point.
c. In the Name box, create a name for this new listener. The name for the new
listener is the network name that applications use to connect to databases in the
SQL Server availability group.
d. To finish creating the listener, select Next twice, and then select Finish. Don't
bring the listener or resource online at this point.
3. Take the cluster role for the availability group offline. In Failover Cluster Manager,
under Roles, right-click the role, and then select Stop Role.
a. Select the Resources tab, and then expand the client access point that you
created. The client access point is offline.
b. Right-click the IP resource, and then select Properties. Note the name of the IP
address, and use it in the $IPResourceName variable in the PowerShell script.
c. Under IP Address, select Static IP Address. Set the IP address as the same
address that you used when you set the load balancer address on the Azure portal.
5. Make the SQL Server availability group dependent on the client access point:
a. In Failover Cluster Manager, select Roles, and then select your availability group.
b. On the Resources tab, under Other Resources, right-click the availability group
resource, and then select Properties.
c. On the Dependencies tab, add the name of the client access point (the listener).
d. Select OK.
a. In Failover Cluster Manager, select Roles, and then select your availability group.
b. On the Resources tab, right-click the client access point under Server Name,
and then select Properties.
c. Select the Dependencies tab. Verify that the IP address is a dependency. If it
isn't, set a dependency on the IP address. If multiple resources are listed, verify that
the IP addresses have OR, not AND, dependencies. Then select OK.
Tip
You can validate that the dependencies are correctly configured. In Failover
Cluster Manager, go to Roles, right-click the availability group, select More
Actions, and then select Show Dependency Report. When the dependencies
are correctly configured, the availability group is dependent on the network
name, and the network name is dependent on the IP address.
a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.
$ListenerILBIP is the IP address that you created on the Azure load balancer
for the availability group listener. Find the $ListenerILBIP in the Failover
Cluster Manager on the same properties page as the SQL Server AG/FCI
Listener Resource Name.
$ListenerProbePort is the probe port that you configured on the Azure load
balancer for the availability group listener, such as 59999. Any unused TCP
port is valid.
PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>"
$IPResourceName = "<IPResourceName>"
$ListenerILBIP = "<n.n.n.n>"
[int]$ListenerProbePort = <nnnnn>

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}
b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
Note
If your SQL Server instances are in separate regions, you need to run the
PowerShell script twice. The first time, use the $ListenerILBIP and
$ListenerProbePort values from the first region. The second time, use the
$ListenerILBIP and $ListenerProbePort values from the second region. The
cluster network name and the cluster IP resource name are also different for
each region.
8. Bring the cluster role for the availability group online. In Failover Cluster Manager,
under Roles, right-click the role, and then select Start Role.
If necessary, repeat the preceding steps to set the cluster parameters for the IP address
of the Windows Server failover cluster:
1. Get the IP address name of the Windows Server failover cluster. In Failover Cluster
Manager, under Cluster Core Resources, locate Server Name.
3. Copy the name of the IP address from Name. It might be Cluster IP Address.
a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.
$ClusterCoreIP is the IP address that you created on the Azure load balancer
for the Windows Server failover cluster's core cluster resource. It's different
from the IP address for the availability group listener.
$ClusterProbePort is the port that you configured on the Azure load balancer
for the Windows Server failover cluster's health probe. It's different from the
probe for the availability group listener.
PowerShell

$ClusterNetworkName = "<MyClusterNetworkName>"
$IPResourceName = "<ClusterIPResourceName>"
$ClusterCoreIP = "<n.n.n.n>"
[int]$ClusterProbePort = <nnnnn>

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ClusterCoreIP";"ProbePort"=$ClusterProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}
b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
If any SQL resource is configured to use a port between 49152 and 65535 (the default
dynamic port range for TCP/IP), add an exclusion for each port. Such resources might
include:
Adding an exclusion will prevent other system processes from being dynamically
assigned to the same port. For this scenario, configure the following exclusions on all
cluster nodes:
It's important to configure the port exclusion when the port is not in use. Otherwise, the
command will fail with a message like "The process cannot access the file because it is
being used by another process."
To confirm that the exclusions are configured correctly,
use the following command: netsh int ipv4 show excludedportrange tcp .
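For example, to exclude the probe ports used in this article from dynamic assignment, run the following on each cluster node while the ports are not in use. The port numbers match this article's examples (58888 and 59999):

```powershell
# Reserve the health probe ports so they can't be handed out dynamically.
netsh int ipv4 add excludedportrange tcp startport=58888 numberofports=1 store=persistent
netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1 store=persistent
# Verify the exclusions.
netsh int ipv4 show excludedportrange tcp
```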
Warning
The port for the availability group listener's health probe has to be different from
the port for the cluster core IP address's health probe. In these examples, the
listener port is 59999 and the cluster core IP address's health probe port is 58888.
Both ports require an "allow inbound" firewall rule.
1. Open SQL Server Management Studio and connect to the primary replica.
3. Right-click the listener name that you created in Failover Cluster Manager, and
then select Properties.
4. In the Port box, specify the port number for the availability group listener. The
default is 1433. Select OK.
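The listener port can also be changed from the primary replica in T-SQL (run here through Invoke-Sqlcmd from the SqlServer module). The availability group and listener names below follow this tutorial's examples and are otherwise placeholders:

```powershell
# Set the listener port on the availability group from the primary replica.
# [MyTestAG] and 'MyListener' are placeholder names; substitute your own.
Invoke-Sqlcmd -ServerInstance "sqlserver-0" -Query "ALTER AVAILABILITY GROUP [MyTestAG] MODIFY LISTENER N'MyListener' (PORT = 1433);"
```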
You now have an availability group for SQL Server on Azure VMs running in Azure
Resource Manager mode.
1. Use RDP to connect to a SQL Server VM that's in the same virtual network but
doesn't own the replica, such as the other replica.
2. Use the sqlcmd utility to test the connection. For example, the following script
establishes a sqlcmd connection to the primary replica through the listener by
using Windows authentication:
sqlcmd -S <listenerName> -E
If the listener is using a port other than the default port (1433), specify the port in
the connection string. For example, the following command connects to a listener
at port 1435:
sqlcmd -S <listenerName>,1435 -E
The sqlcmd utility automatically connects to whichever SQL Server instance is the
current primary replica of the availability group.
Tip
Make sure that the port you specify is open on the firewall of both SQL Server VMs.
Both servers require an inbound rule for the TCP port that you use. For more
information, see Add or edit firewall rules.
Next steps
Add an IP address to a load balancer for a second availability group
Configure automatic or manual failover
Applies to:
SQL Server on Azure VM
This article explains how to create a load balancer for a SQL Server Always On
availability group on Azure virtual machines that run in a single subnet with Azure
Resource Manager. An availability group requires a load balancer when the SQL
Server instances are on Azure virtual machines. The load balancer stores the IP address
Server instances are on Azure Virtual Machines. The load balancer stores the IP address
for the availability group listener. If an availability group spans multiple regions, each
region needs a load balancer.
To complete this task, you need to have a SQL Server Always On availability group
deployed in Azure VMs that are running with Resource Manager. Both SQL Server virtual
machines must belong to the same availability set. You can use the Microsoft template
to automatically create the availability group in Resource Manager. This template
automatically creates an internal load balancer for you.
This article requires that your availability groups are already configured.
1. In the Azure portal, create the load balancer and configure the IP address.
2. Configure the back-end pool.
3. Create the probe.
4. Set the load-balancing rules.
Note
If the SQL Server instances are in multiple resource groups and regions, perform
each step twice, once in each resource group.
Important
On September 30, 2025, the Basic SKU for Azure Load Balancer will be retired.
For more information, see the official announcement. If you're currently using
Basic Load Balancer, upgrade to Standard Load Balancer before the retirement
date. For guidance, review Upgrade Load Balancer.
1. In the Azure portal, open the resource group that contains the SQL Server virtual
machines.
3. Search for load balancer. Choose Load Balancer (published by Microsoft) in the
search results.
| Setting | Value |
|---|---|
| Resource group | Use the same resource group as the virtual machine. |
| Name | Use a text name for the load balancer, for example sqlLB. |
| SKU | Standard |
| Type | Internal |
10. Choose Review + Create to validate the configuration, and then select Create to
create the load balancer and the frontend IP.
Azure creates the load balancer. The load balancer belongs to a specific network,
subnet, resource group, and location. After Azure completes the task, verify the load
balancer settings in Azure.
To configure the load balancer, you need to create a backend pool, create a probe, and
set the load-balancing rules. Do these tasks in the Azure portal.
Step 2: Configure the backend pool
Azure calls the back-end address pool a backend pool. In this case, the backend pool
contains the addresses of the two SQL Server instances in your availability group.
1. In the Azure portal, go to your availability group. You might need to refresh the
view to see the newly created load balancer.
2. Select the load balancer, select Backend pools, and select +Add.
5. Select Add to associate the backend pool with the availability set that contains the
VMs.
6. Under Virtual machine choose the SQL Server virtual machines that will host
availability group replicas.
Note
If both virtual machines aren't specified, only connections to the primary
replica will succeed.
Azure updates the settings for the back-end address pool. Now your availability set has
a pool of two SQL Server instances.
1. Select the load balancer, choose Health probes, and then select +Add.
Note
Make sure that the port you specify is open on the firewall of both SQL Server
instances. Both instances require an inbound rule for the TCP port that you use. For
more information, see Add or Edit Firewall Rule.
Azure creates the probe and then uses it to test which SQL Server instance has the
listener for the availability group.
1. Select the load balancer, choose Load balancing rules, and select +Add.
| Setting | Value |
|---|---|
| Frontend IP address | Choose the address that you created when you created the load balancer. |
| Backend pool | Choose the backend pool that contains the virtual machines targeted for the load balancer. |
Note
You might have to scroll down the blade to view all the settings.
Azure configures the load-balancing rule. Now the load balancer is configured to route
traffic to the SQL Server instance that hosts the listener for the availability group.
At this point, the resource group has a load balancer that connects to both SQL Server
machines. The load balancer also contains an IP address for the SQL Server Always On
availability group listener, so that either machine can respond to requests for the
availability groups.
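If you script the load balancer instead of using the portal, the load-balancing rule can be sketched with the Azure CLI from PowerShell. The resource names below are hypothetical; note `--floating-ip true`, which corresponds to the direct server return / floating IP setting the availability group listener rule requires:

```powershell
# Create the load-balancing rule for the listener; names are placeholders.
az network lb rule create `
  --resource-group MyResourceGroup `
  --lb-name sqlLB `
  --name AGListenerRule `
  --protocol Tcp `
  --frontend-port 1433 `
  --backend-port 1433 `
  --frontend-ip-name sqlLBFrontEnd `
  --backend-pool-name sqlLBBackEndPool `
  --probe-name SQLAlwaysOnEndPointProbe `
  --floating-ip true
```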
Note
If your SQL Server instances are in two separate regions, repeat the steps in the
other region. Each region requires a load balancer.
1. In the Azure portal, go to the same Azure load balancer. Select Frontend IP
configuration and select +Add. Use the IP Address you configured for the WSFC in
the cluster core resources. Set the IP address as static.
2. On the load balancer, select Health probes, and then select +Add.
5. Set the load balancing rules. Select Load balancing rules, and select +Add.
| Setting | Value |
|---|---|
| Frontend IP address | Choose the address that you created when you configured the WSFC IP address. This is different from the listener IP address. |
| Backend pool | Choose the backend pool that contains the virtual machines targeted for the load balancer. |
The availability group listener is an IP address and network name that the SQL Server
availability group listens on. To create the availability group listener:
a. Use RDP to connect to the Azure virtual machine that hosts the primary replica.
c. Select the Networks node, and note the cluster network name. Use this name in
the $ClusterNetworkName variable in the PowerShell script. In the following image,
the cluster network name is Cluster Network 1:
2. Add the client access point. The client access point is the network name that
applications use to connect to the databases in an availability group.
a. In Failover Cluster Manager, expand the cluster name, and then select Roles.
b. On the Roles pane, right-click the availability group name, and then select Add
Resource > Client Access Point.
c. In the Name box, create a name for this new listener.
The name for the new
listener is the network name that applications use to connect to databases in the
SQL Server availability group.
d. To finish creating the listener, select Next twice, and then select Finish. Don't
bring the listener or resource online at this point.
3. Take the cluster role for the availability group offline. In Failover Cluster Manager,
under Roles, right-click the role, and then select Stop Role.
a. Select the Resources tab, and then expand the client access point that you
created. The client access point is offline.
b. Right-click the IP resource, and then select Properties. Note the name of the IP
address, and use it in the $IPResourceName variable in the PowerShell script.
c. Under IP Address, select Static IP Address. Set the IP address as the same
address that you used when you set the load balancer address on the Azure portal.
5. Make the SQL Server availability group dependent on the client access point:
a. In Failover Cluster Manager, select Roles, and then select your availability group.
b. On the Resources tab, under Other Resources, right-click the availability group
resource, and then select Properties.
c. On the Dependencies tab, add the name of the client access point (the listener).
d. Select OK.
a. In Failover Cluster Manager, select Roles, and then select your availability group.
b. On the Resources tab, right-click the client access point under Server Name,
and then select Properties.
c. Select the Dependencies tab. Verify that the IP address is a dependency. If it
isn't, set a dependency on the IP address. If multiple resources are listed, verify that
the IP addresses have OR, not AND, dependencies. Then select OK.
Tip
You can validate that the dependencies are correctly configured. In Failover
Cluster Manager, go to Roles, right-click the availability group, select More
Actions, and then select Show Dependency Report. When the dependencies
are correctly configured, the availability group is dependent on the network
name, and the network name is dependent on the IP address.
a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.
$ListenerILBIP is the IP address that you created on the Azure load balancer
for the availability group listener. Find the $ListenerILBIP in the Failover
Cluster Manager on the same properties page as the SQL Server AG/FCI
Listener Resource Name.
$ListenerProbePort is the port that you configured on the Azure load
balancer for the availability group listener, such as 59999. Any unused TCP
port is valid.
PowerShell
[int]$ListenerProbePort = <nnnnn>
Import-Module FailoverClusters
b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
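The script above is truncated. A fuller sketch of what it likely sets, assuming the FailoverClusters module and the variables described above (all values are placeholders you must replace):

```powershell
$ClusterNetworkName = "<MyClusterNetworkName>"   # cluster network name noted earlier
$IPResourceName = "<MyIPResourceName>"           # IP resource name of the listener
$ListenerILBIP = "<n.n.n.n>"                     # load balancer frontend IP for the listener
[int]$ListenerProbePort = <nnnnn>                # probe port, for example 59999

Import-Module FailoverClusters

# Bind the cluster IP resource to the load balancer IP and probe port
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    "Address"    = "$ListenerILBIP"
    "ProbePort"  = $ListenerProbePort
    "SubnetMask" = "255.255.255.255"
    "Network"    = "$ClusterNetworkName"
    "EnableDhcp" = 0
}
```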
Note
If your SQL Server instances are in separate regions, you need to run the
PowerShell script twice. The first time, use the $ListenerILBIP and
$ListenerProbePort values from the first region. The second time, use the
$ListenerILBIP and $ListenerProbePort values from the second region. The
cluster network name and the cluster IP resource name are also different for
each region.
8. Bring the cluster role for the availability group online. In Failover Cluster Manager,
under Roles, right-click the role, and then select Start Role.
If necessary, repeat the preceding steps to set the cluster parameters for the IP address
of the Windows Server failover cluster:
1. Get the IP address name of the Windows Server failover cluster. In Failover Cluster
Manager, under Cluster Core Resources, locate Server Name.
3. Copy the name of the IP address from Name. It might be Cluster IP Address.
a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.
$ClusterCoreIP is the IP address that you created on the Azure load balancer
for the Windows Server failover cluster's core cluster resource. It's different
from the IP address for the availability group listener.
$ClusterProbePort is the port that you configured on the Azure load balancer
for the Windows Server failover cluster's health probe. It's different from the
probe for the availability group listener.
PowerShell
Import-Module FailoverClusters
b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
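As with the listener script, a fuller sketch of the truncated cluster-core script (placeholder values; FailoverClusters module assumed):

```powershell
$ClusterNetworkName = "<MyClusterNetworkName>"
$IPResourceName = "<ClusterIPResourceName>"   # for example, Cluster IP Address
$ClusterCoreIP = "<n.n.n.n>"                  # load balancer IP for the cluster core resource
[int]$ClusterProbePort = <nnnnn>              # for example, 58888

Import-Module FailoverClusters

# Bind the cluster core IP resource to the load balancer IP and its probe port
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    "Address"    = "$ClusterCoreIP"
    "ProbePort"  = $ClusterProbePort
    "SubnetMask" = "255.255.255.255"
    "Network"    = "$ClusterNetworkName"
    "EnableDhcp" = 0
}
```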
If any SQL resource is configured to use a port between 49152 and 65535 (the default
dynamic port range for TCP/IP), add an exclusion for each port. Such resources might
include:
Adding an exclusion will prevent other system processes from being dynamically
assigned to the same port. For this scenario, configure the following exclusions on all
cluster nodes:
It's important to configure the port exclusion when the port is not in use. Otherwise, the
command will fail with a message like "The process cannot access the file because it is
being used by another process."
To confirm that the exclusions are configured correctly, use the following command:
netsh int ipv4 show excludedportrange tcp.
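An exclusion can be added with a command of the following form (the port number is an example; run it from an elevated prompt while the port isn't in use):

```powershell
netsh int ipv4 add excludedportrange tcp startport=59999 numberofports=1 store=persistent
```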
Warning
The port for the availability group listener's health probe has to be different from
the port for the cluster core IP address's health probe. In these examples, the
listener port is 59999 and the cluster core IP address's health probe port is 58888.
Both ports require an "allow inbound" firewall rule.
1. Start SQL Server Management Studio, and then connect to the primary replica.
4. In the Port box, specify the port number for the availability group listener by using
the $EndpointPort you used earlier (1433 was the default), and then select OK.
You now have an availability group in Azure virtual machines running in Resource
Manager mode.
1. Use remote desktop protocol (RDP) to connect to a SQL Server instance that's in
the same virtual network, but doesn't own the replica. This server can be the other
SQL Server instance in the cluster.
2. Use the sqlcmd utility to test the connection. For example, the following script
establishes a sqlcmd connection to the primary replica through the listener with
Windows authentication:
Console
sqlcmd -S <listenerName> -E
The sqlcmd connection automatically connects to the SQL Server instance that hosts
the primary replica.
To add an IP address to a load balancer with the Azure portal, do the following steps:
1. In the Azure portal, open the resource group that contains the load balancer, and
then select the load balancer.
2. Under Settings, select Frontend IP configuration, and then select + Add.
3. Under Add frontend IP address, assign a name for the front end.
4. Verify that the Virtual network and the Subnet are the same as the SQL Server
instances.
Tip
You can set the IP address to static and type an address that is not currently
used in the subnet. Alternatively, you can set the IP address to dynamic and
save the new front-end IP pool. When you do so, the Azure portal
automatically assigns an available IP address to the pool. You can then reopen
the front-end IP pool and change the assignment to static.
7. Add a health probe by selecting Health probes under Settings, and use the following
settings:
Protocol: TCP
Port: An unused TCP port, which must be available on all virtual machines. It can't be
used for any other purpose. No two listeners can use the same probe port.
Interval: The amount of time between probe attempts. Use the default (5).
9. Create a load-balancing rule. Under Settings, select Load balancing rules, and
then select + Add.
10. Configure the new load-balancing rule by using the following settings:
Backend pool: The pool that contains the virtual machines with the SQL Server
instances.
Protocol: TCP
Port: Use the port that the SQL Server instances are using. A default instance uses
port 1433, unless you changed it.
After you've added an IP address for the listener, configure the additional availability
group by doing the following steps:
1. Verify that the probe port for the new IP address is open on both SQL Server
virtual machines.
Important
When you create the IP address, use the IP address that you added to the
load balancer.
4. Make the SQL Server availability group resource dependent on the client access
point.
If you're on the secondary replica VM, and you're unable to connect to the listener, it's
possible the probe port was not configured correctly.
You can use the following script to validate the probe port is correctly configured for the
availability group:
PowerShell
Clear-Host
# Filter added to complete the truncated snippet: show probe-related parameters
# for the cluster IP address resources
Get-ClusterResource `
| Where-Object { $_.ResourceType.Name -like "IP Address" } `
| Get-ClusterParameter `
| Where-Object { ($_.Name -like "ProbePort") -or ($_.Name -like "Address") }
2. In the Azure portal, select the load balancer and select Load balancing rules, and
then select +Add.
Name: A name to identify the load-balancing rule for the distributed availability
group.
Frontend IP address: Use the same frontend IP address as the availability group.
Backend pool: The pool that contains the virtual machines with the SQL Server
instances.
Protocol: TCP
Port: 5022, the port for the distributed availability group endpoint listener.
Repeat these steps for the load balancer on the other availability groups that participate
in the distributed availability groups.
If you have an Azure network security group to restrict access, make sure that the allow
rules include the backend SQL Server VM IP addresses, the load balancer floating IP
addresses for the AG listener, and the cluster core IP address, if applicable.
Next steps
To learn more, see:
Applies to:
SQL Server on Azure VM
This document shows you how to use PowerShell to do one of the following tasks:
An availability group listener is a virtual network name that clients connect to for
database access. On Azure Virtual Machines in a single subnet, a load balancer holds the
IP address for the listener. The load balancer routes traffic to the instance of SQL Server
that is listening on the probe port. Usually, an availability group uses an internal load
balancer. An Azure internal load balancer can host one or many IP addresses. Each IP
address uses a specific probe port.
The ability to assign multiple IP addresses to an internal load balancer is new to Azure
and is only available in the Resource Manager model. To complete this task, you need to
have a SQL Server availability group deployed on Azure Virtual Machines in the
Resource Manager model. Both SQL Server virtual machines must belong to the same
availability set. You can use the Microsoft template to automatically create the
availability group in Azure Resource Manager. This template automatically creates the
availability group, including the internal load balancer for you. If you prefer, you can
manually configure an Always On availability group.
To complete the steps in this article, your availability groups need to be already
configured.
Note
This article uses the Azure Az PowerShell module, which is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell
module, see Install Azure PowerShell. To learn how to migrate to the Az PowerShell
module, see Migrate Azure PowerShell from AzureRM to Az.
PowerShell
Connect-AzAccount
If you have multiple subscriptions, use the Set-AzContext cmdlet to select which
subscription your PowerShell session should use. To see which subscription the current
PowerShell session is using, run Get-AzContext. To see all your subscriptions, run
Get-AzSubscription.
PowerShell
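The snippet itself is omitted here; it likely selects the subscription. A minimal sketch (the subscription name is a placeholder):

```powershell
# Select the subscription for this PowerShell session (accepts a name or an ID)
Set-AzContext -Subscription "<SubscriptionNameOrId>"
```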
If you are restricting access with an Azure Network Security Group, ensure that the allow
rules include the backend SQL Server VM IP addresses, and the load balancer floating IP
addresses for the AG listener and the cluster core IP address, if applicable.
The current Microsoft template for an availability group uses a basic load balancer with
basic IP addresses.
Note
You will need to configure a service endpoint if you use a standard load balancer
and Azure Storage for the cloud witness.
The examples in this article specify a standard load balancer. In the examples, the script
includes -sku Standard.
To create a basic load balancer, remove -sku Standard from the line that creates the
load balancer. For example:
PowerShell
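The omitted examples might look like the following sketch. The resource names are placeholders, and $FEConfig and $BEPool are assumed to have been created earlier with New-AzLoadBalancerFrontendIpConfig and New-AzLoadBalancerBackendAddressPoolConfig:

```powershell
# Standard load balancer; remove '-Sku Standard' to create a basic load balancer
$ILB = New-AzLoadBalancer -ResourceGroupName "<resourceGroup>" -Name "<lbName>" `
    -Location "<region>" -Sku Standard `
    -FrontendIpConfiguration $FEConfig -BackendAddressPool $BEPool
```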
Note
If you created your availability group with the Microsoft template, the internal load
balancer was already created.
The following PowerShell script creates an internal load balancer, configures the load-
balancing rules, and sets an IP address for the load balancer. To run the script, open
Windows PowerShell ISE, and then paste the script in the Script pane. Use Connect-
AzAccount to log in to PowerShell. If you have multiple Azure subscriptions, use Select-
AzSubscription to set the subscription.
PowerShell
# Connect-AzAccount
# $VMNames, $BEPool, and $ResourceGroupName are defined earlier in the full script
foreach($VMName in $VMNames)
{
    $vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName
    $NICName = ($vm.NetworkProfile.NetworkInterfaces.Id.split('/') | select -last 1)
    $NIC = Get-AzNetworkInterface -ResourceGroupName $ResourceGroupName -Name $NICName
    # add the NIC's primary IP configuration to the load balancer backend pool
    $NIC.IpConfigurations[0].LoadBalancerBackendAddressPools = $BEPool
    $NIC | Set-AzNetworkInterface
}
The front-end port is the port that applications use to connect to the SQL Server
instance. IP addresses for different availability groups can use the same front-end port.
Note
For SQL Server availability groups, each IP address requires a specific probe port.
For example, if one IP address on a load balancer uses probe port 59999, no other
IP addresses on that load balancer can use probe port 59999.
For information about load balancer limits, see Private front end IP per load
balancer under Networking Limits - Azure Resource Manager.
For information about availability group limits, see Restrictions (Availability
Groups).
The following script adds a new IP address to an existing load balancer. The ILB uses the
listener port for the load-balancing front-end port. This port can be the port that SQL
Server is listening on. For default instances of SQL Server, the port is 1433. The load-
balancing rule for an availability group requires a floating IP (direct server return) so the
back-end port is the same as the front-end port. Update the variables for your
environment.
PowerShell
# Connect-AzAccount
$count = $ILB.FrontendIpConfigurations.Count + 1
$FrontEndConfigurationName = "FE_SQLAGILB_$count"
$LBProbeName = "ILBPROBE_$count"
$LBConfigrulename = "ILBCR_$count"
a. Use RDP to connect to the Azure virtual machine that hosts the primary replica.
c. Select the Networks node, and note the cluster network name. Use this name in
the $ClusterNetworkName variable in the PowerShell script. In the following image,
the cluster network name is Cluster Network 1:
2. Add the client access point. The client access point is the network name that
applications use to connect to the databases in an availability group.
a. In Failover Cluster Manager, expand the cluster name, and then select Roles.
b. On the Roles pane, right-click the availability group name, and then select Add
Resource > Client Access Point.
d. To finish creating the listener, select Next twice, and then select Finish. Don't
bring the listener or resource online at this point.
3. Take the cluster role for the availability group offline. In Failover Cluster Manager,
under Roles, right-click the role, and then select Stop Role.
a. Select the Resources tab, and then expand the client access point that you
created. The client access point is offline.
b. Right-click the IP resource, and then select Properties. Note the name of the IP
address, and use it in the $IPResourceName variable in the PowerShell script.
c. Under IP Address, select Static IP Address. Set the IP address as the same
address that you used when you set the load balancer address on the Azure portal.
5. Make the SQL Server availability group dependent on the client access point:
a. In Failover Cluster Manager, select Roles, and then select your availability group.
b. On the Resources tab, under Other Resources, right-click the availability group
resource, and then select Properties.
c. On the Dependencies tab, add the name of the client access point (the listener).
d. Select OK.
a. In Failover Cluster Manager, select Roles, and then select your availability group.
b. On the Resources tab, right-click the client access point under Server Name,
and then select Properties.
c. Select the Dependencies tab. Verify that the IP address is a dependency. If it
isn't, set a dependency on the IP address. If multiple resources are listed, verify that
the IP addresses have OR, not AND, dependencies. Then select OK.
Tip
You can validate that the dependencies are correctly configured. In Failover
Cluster Manager, go to Roles, right-click the availability group, select More
Actions, and then select Show Dependency Report. When the dependencies
are correctly configured, the availability group is dependent on the network
name, and the network name is dependent on the IP address.
a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.
$ListenerILBIP is the IP address that you created on the Azure load balancer
for the availability group listener. Find the $ListenerILBIP in the Failover
Cluster Manager on the same properties page as the SQL Server AG/FCI
Listener Resource Name.
$ListenerProbePort is the port that you configured on the Azure load
balancer for the availability group listener, such as 59999. Any unused TCP
port is valid.
PowerShell
[int]$ListenerProbePort = <nnnnn>
Import-Module FailoverClusters
b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
Note
If your SQL Server instances are in separate regions, you need to run the
PowerShell script twice. The first time, use the $ListenerILBIP and
$ListenerProbePort values from the first region. The second time, use the
$ListenerILBIP and $ListenerProbePort values from the second region. The
cluster network name and the cluster IP resource name are also different for
each region.
8. Bring the cluster role for the availability group online. In Failover Cluster Manager,
under Roles, right-click the role, and then select Start Role.
If necessary, repeat the preceding steps to set the cluster parameters for the IP address
of the Windows Server failover cluster:
1. Get the IP address name of the Windows Server failover cluster. In Failover Cluster
Manager, under Cluster Core Resources, locate Server Name.
3. Copy the name of the IP address from Name. It might be Cluster IP Address.
a. Copy the following PowerShell script to one of your SQL Server instances.
Update the variables for your environment.
$ClusterCoreIP is the IP address that you created on the Azure load balancer
for the Windows Server failover cluster's core cluster resource. It's different
from the IP address for the availability group listener.
$ClusterProbePort is the port that you configured on the Azure load balancer
for the Windows Server failover cluster's health probe. It's different from the
probe for the availability group listener.
PowerShell
Import-Module FailoverClusters
b. Set the cluster parameters by running the PowerShell script on one of the cluster
nodes.
If any SQL resource is configured to use a port between 49152 and 65535 (the default
dynamic port range for TCP/IP), add an exclusion for each port. Such resources might
include:
Adding an exclusion will prevent other system processes from being dynamically
assigned to the same port. For this scenario, configure the following exclusions on all
cluster nodes:
It's important to configure the port exclusion when the port is not in use. Otherwise, the
command will fail with a message like "The process cannot access the file because it is
being used by another process."
To confirm that the exclusions are configured correctly, use the following command:
netsh int ipv4 show excludedportrange tcp.
Warning
The port for the availability group listener's health probe has to be different from
the port for the cluster core IP address's health probe. In these examples, the
listener port is 59999 and the cluster core IP address's health probe port is 58888.
Both ports require an "allow inbound" firewall rule.
3. You should now see the listener name that you created in Failover Cluster
Manager. Right-click the listener name and select Properties.
4. In the Port box, specify the port number for the availability group listener by using
the $EndpointPort you used earlier (1433 was the default), then select OK.
1. Use Remote Desktop Protocol (RDP) to connect to a SQL Server that is in the same
virtual network, but does not own the replica. It might be the other SQL Server in
the cluster.
2. Use the sqlcmd utility to test the connection. For example, the following script
establishes a sqlcmd connection to the primary replica through the listener with
Windows authentication:
sqlcmd -S <listenerName> -E
If the listener is using a port other than the default port (1433), specify the port in
the connection string. For example, the following sqlcmd command connects to a
listener at port 1435:
sqlcmd -S <listenerName>,1435 -E
Note
Make sure that the port you specify is open on the firewall of both SQL Servers.
Both servers require an inbound rule for the TCP port that you use. For more
information, see Add or Edit Firewall Rule.
If you're on the secondary replica VM, and you're unable to connect to the listener, it's
possible the probe port was not configured correctly.
You can use the following script to validate the probe port is correctly configured for the
availability group:
PowerShell
Clear-Host
# Filter added to complete the truncated snippet: show probe-related parameters
# for the cluster IP address resources
Get-ClusterResource `
| Where-Object { $_.ResourceType.Name -like "IP Address" } `
| Get-ClusterParameter `
| Where-Object { ($_.Name -like "ProbePort") -or ($_.Name -like "Address") }
With an internal load balancer, you can access the listener only from within the same
virtual network.
If you're restricting access with an Azure Network Security Group, ensure that the
allow rules include:
The backend SQL Server VM IP addresses
The load balancer floating IP addresses for the AG listener
The cluster core IP address, if applicable.
Create a service endpoint when using a standard load balancer with Azure Storage
for the cloud witness. For more information, see Grant access from a virtual
network.
PowerShell cmdlets
Use the following PowerShell cmdlets to create an internal load balancer for Azure
Virtual Machines.
Applies to:
SQL Server on Azure VM
On Azure virtual machines, clusters use a load balancer to hold an IP address that needs
to be on one cluster node at a time. In this solution, the load balancer holds the IP
address for the virtual network name (VNN) listener for the Always On availability group
when the SQL Server VMs are in a single subnet.
This article teaches you to configure a load balancer by using the Azure Load Balancer
service. The load balancer will route traffic to your availability group listener with SQL
Server on Azure VMs for high availability and disaster recovery (HADR).
For an alternative connectivity option for customers who are on SQL Server 2019 CU8
and later, consider a distributed network name (DNN) listener instead. A DNN listener
offers simplified configuration and improved failover.
Prerequisites
Before you complete the steps in this article, you should already have:
Decided that Azure Load Balancer is the appropriate connectivity option for your
availability group.
Installed the latest version of PowerShell.
Internal: An internal load balancer can be accessed only from private resources
that are internal to the network. When you configure an internal load balancer and
its rules, use the same IP address as the availability group listener for the frontend
IP address.
External: An external load balancer can route traffic from the public to internal
resources. When you configure an external load balancer, you can't use the same
IP address as the availability group listener because the listener IP address can't be
a public IP address.
Important
On September 30, 2025, the Basic SKU for Azure Load Balancer will be retired. For
more information, see the official announcement. If you're currently using Basic
Load Balancer, upgrade to Standard Load Balancer before the retirement date. For
guidance, review Upgrade Load Balancer.
1. In the Azure portal, go to the resource group that contains the virtual machines.
2. Select Add. Search Azure Marketplace for load balancer. Select Load Balancer.
3. Select Create.
4. In Create load balancer, on the Basics tab, set up the load balancer by using the
following values:
5. Select Add to associate the backend pool with the availability set that contains the
VMs.
6. Under Virtual machine, choose the virtual machines that will participate as cluster
nodes. Be sure to include all virtual machines that will host the availability group.
Add only the primary IP address of each VM. Don't add any secondary IP
addresses.
3. Select Add.
2. Select Add.
3. Set these parameters:
4. Select Save.
Update the variables in the following script with values from your environment.
Remove the angle brackets ( < and > ) from the script.
PowerShell
$ILBIP = "<n.n.n.n>"
[int]$ProbePort = <nnnnn>
Import-Module FailoverClusters
The following table describes the values that you need to update:
ClusterNetworkName: The name of the Windows Server failover cluster network. In
Failover Cluster Manager > Networks, right-click the network and select Properties.
The correct value is under Name on the General tab.
IPResourceName: The resource name for the IP address of the AG listener. In Failover
Cluster Manager > Roles, under the availability group role, under Server Name,
right-click the IP address resource and select Properties. The correct value is under
Name on the General tab.
ILBIP: The IP address of the internal load balancer. This address is configured in the
Azure portal as the frontend address of the internal load balancer. It's the same IP
address as the availability group listener. You can find it in Failover Cluster Manager,
on the same properties page where you located the value for IPResourceName.
ProbePort: The probe port that you configured in the health probe of the load
balancer. Any unused TCP port is valid.
SubnetMask: The subnet mask for the cluster parameter. It must be the TCP/IP
broadcast address: 255.255.255.255.
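Combining these variables with the truncated script above, the full command is likely of the following form (FailoverClusters module assumed; values are placeholders):

```powershell
$ClusterNetworkName = "<MyClusterNetworkName>"
$IPResourceName = "<MyIPResourceName>"
$ILBIP = "<n.n.n.n>"
[int]$ProbePort = <nnnnn>

Import-Module FailoverClusters

# Bind the listener's cluster IP resource to the load balancer IP and probe port
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    "Address"    = "$ILBIP"
    "ProbePort"  = $ProbePort
    "SubnetMask" = "255.255.255.255"
    "Network"    = "$ClusterNetworkName"
    "EnableDhcp" = 0
}
```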
After you set the cluster probe, you can see all the cluster parameters in PowerShell.
Run this script:
PowerShell
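The script body is missing here; it's likely a one-liner such as the following, using $IPResourceName from the table above:

```powershell
# Show all parameters now set on the listener's cluster IP resource
Get-ClusterResource $IPResourceName | Get-ClusterParameter
```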
If your client doesn't support the MultiSubnetFailover parameter, you can modify the
RegisterAllProvidersIP and HostRecordTTL settings to prevent connectivity delays after
failover.
Use PowerShell to modify the RegisterAllProvidersIp and HostRecordTTL settings:
PowerShell
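The script itself is omitted; a sketch, assuming a placeholder listener resource name and a TTL of 300 seconds:

```powershell
# Stop registering all provider IPs and lower the DNS TTL for the listener name
Get-ClusterResource "<ListenerResourceName>" | Set-ClusterParameter RegisterAllProvidersIP 0
Get-ClusterResource "<ListenerResourceName>" | Set-ClusterParameter HostRecordTTL 300
# Take the listener resource offline and back online for the change to take effect
```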
To learn more, see the documentation about listener connection timeout in SQL Server.
Tip
If you lower the HostRecordTTL setting, cached DNS records expire sooner and
clients can then reconnect more quickly. As such, reducing the HostRecordTTL
setting might increase traffic to the DNS servers.
Test failover
Test failover of the clustered resource to validate cluster functionality:
1. Open SQL Server Management Studio and connect to your availability group
listener.
2. In Object Explorer, expand Always On Availability Group.
3. Right-click the availability group and select Failover.
4. Follow the wizard prompts to fail over the availability group to a secondary replica.
Failover succeeds when the replicas switch roles and are both synchronized.
Test connectivity
To test connectivity, sign in to another virtual machine in the same virtual network. Open
SQL Server Management Studio and connect to the availability group listener.
Note
If you need to, you can download SQL Server Management Studio.
Next steps
After the VNN is created, consider optimizing the cluster settings for SQL Server VMs.
Applies to:
SQL Server on Azure VM
With SQL Server on Azure VMs in a single subnet, the distributed network name (DNN)
routes traffic to the appropriate clustered resource. It provides an easier way to connect
to an Always On availability group (AG) than the virtual network name (VNN) listener,
without the need for an Azure Load Balancer.
This article teaches you to configure a DNN listener to replace the VNN listener and
route traffic to your availability group with SQL Server on Azure VMs for high availability
and disaster recovery (HADR).
For an alternative connectivity option, consider a VNN listener and Azure Load Balancer
instead.
Overview
A distributed network name (DNN) listener replaces the traditional virtual network name
(VNN) availability group listener when used with Always On availability groups on SQL
Server VMs. This removes the need for an Azure Load Balancer to route traffic,
simplifying deployment and maintenance and improving failover.
Use the DNN listener to replace an existing VNN listener, or alternatively, use it in
conjunction with an existing VNN listener so that your availability group has two distinct
connection points - one using the VNN listener name (and port if non-default), and one
using the DNN listener name and port.
Caution
The routing behavior of a DNN differs from that of a VNN. Don't use port
1433. To learn more, see the Port considerations section later in this article.
Prerequisites
Before you complete the steps in this article, you should already have:
SQL Server 2019 CU8 or later, SQL Server 2017 CU25 or later, or SQL Server 2016
SP3 or later, running on Windows Server 2016 or later.
Decided that the distributed network name is the appropriate connectivity option
for your HADR solution.
Configured your Always On availability group.
Installed the latest version of PowerShell.
Identified the unique port that you will use for the DNN listener. The port used for
a DNN listener must be unique across all replicas of the availability group or
failover cluster instance. No other connection can share the same port.
Create script
Use PowerShell to create the distributed network name (DNN) resource and associate it
with your availability group.
PowerShell
param (
    [Parameter(Mandatory=$true)][string]$Ag,
    [Parameter(Mandatory=$true)][string]$Dns,
    [Parameter(Mandatory=$true)][string]$Port
)
Write-Host "Add a DNN listener for availability group $Ag with DNS name $Dns and port $Port"
$ErrorActionPreference = "Stop"
# create the DNN resource with the port as the resource name
Add-ClusterResource -Name $Port -ResourceType "Distributed Network Name" -Group $Ag
# set the DNS name of the DNN resource and bring it online
Get-ClusterResource -Name $Port | Set-ClusterParameter -Name DnsName -Value $Dns
Start-ClusterResource -Name $Port
# append the DNN resource to the availability group's dependency expression
$Dep = Get-ClusterResourceDependency -Resource $Ag
if ( $Dep.DependencyExpression -match '\s*\((.*)\)\s*' )
{
    $DepStr = "$($Matches.1) or [$Port]"
}
else
{
    $DepStr = "[$Port]"
}
Write-Host "$DepStr"
Set-ClusterResourceDependency -Resource $Ag -Dependency "$DepStr"
# bring the availability group resource online
Start-ClusterResource -Name $Ag
Execute script
To create the DNN listener, execute the script passing in parameters for the name of the
availability group, listener name, and port.
For example, assuming an availability group name of ag1 , listener name of dnnlsnr , and
listener port as 6789 , follow these steps:
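The example invocation is missing here; it might look like the following, where the script filename is an assumption:

```powershell
# Save the create script as add_dnn_listener.ps1 (any name works), then run:
.\add_dnn_listener.ps1 -Ag ag1 -Dns dnnlsnr -Port 6789
```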
Verify listener
Use either SQL Server Management Studio or Transact-SQL to confirm your DNN
listener is created successfully.
Transact-SQL
Use Transact-SQL to view the status of the DNN listener:
SQL
SELECT * FROM sys.availability_group_listeners;
The following is an example of a connection string for listener name DNN_Listener and
port 6789:
Data Source=DNN_Listener,6789;MultiSubnetFailover=True
Test failover
Test failover of the availability group to ensure functionality.
1. Connect to the DNN listener or one of the replicas by using SQL Server
Management Studio (SSMS).
2. Expand Always On Availability Group in Object Explorer.
3. Right-click the availability group and choose Failover to open the Failover Wizard.
4. Follow the prompts to choose a failover target and fail the availability group over
to a secondary replica.
5. Confirm the database is in a synchronized state on the new primary replica.
6. (Optional) Fail back to the original primary, or another secondary replica.
Test connectivity
Test the connectivity to your DNN listener with these steps:
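The steps are elided here but presumably mirror the listener test earlier in this document. For example, from another VM in the same virtual network, using the listener name and port from the earlier example:

```powershell
# Connect through the DNN listener with Windows authentication
sqlcmd -S dnnlsnr,6789 -E -Q "SELECT @@SERVERNAME"
```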
Limitations
DNN listeners must be configured with a unique port. The port can't be shared
with any other connection on any replica.
The client connecting to the DNN listener must support the
MultiSubnetFailover=True parameter in the connection string.
There might be additional considerations when you're working with other SQL
Server features and an availability group with a DNN. For more information, see AG
with DNN interoperability.
Port considerations
DNN listeners are designed to listen on all IP addresses, but on a specific, unique port.
The DNS entry for the listener name should resolve to the addresses of all replicas in the
availability group. This is done automatically with the PowerShell script provided in the
Create Script section. Since DNN listeners accept connections on all IP addresses, it is
critical that the listener port be unique, and not in use by any other replica in the
availability group. Since SQL Server listens on port 1433 by default, either directly or via
the SQL Browser service, using port 1433 for the DNN listener is strongly discouraged.
If the port chosen for the DNN listener is between 49,152 and 65,535 (the
default dynamic port range for TCP/IP), add an exclusion for it. Doing so prevents
other systems from being dynamically assigned the same port.
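For example, to reserve the example port 6789 from this article on each replica, you could add a port exclusion from an elevated prompt. This is a sketch; adjust the port to your own listener:
PowerShell
netsh int ipv4 add excludedportrange protocol=tcp startport=6789 numberofports=1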
Next steps
Once the availability group is deployed, consider optimizing the HADR settings for SQL
Server on Azure VMs.
Applies to:
SQL Server on Azure VM
Tip
There are certain SQL Server features that rely on a hard-coded virtual network name
(VNN). As such, when using the distributed network name (DNN) listener with your
Always On availability group and SQL Server on Azure VMs in a single subnet, there may
be some additional considerations.
This article details SQL Server features and interoperability with the availability group
DNN listener.
Behavior differences
There are some behavior differences between the functionality of the VNN listener and
DNN listener that are important to note:
Failover time: Failover time is faster when using a DNN listener since there is no
need to wait for the network load balancer to detect the failure event and change
its routing.
Existing connections: Connections made to a specific database within a failing-over
availability group will close, but other connections to the primary replica will
remain open since the DNN stays online during the failover process. This is
different than a traditional VNN environment where all connections to the primary
replica typically close when the availability group fails over, the listener goes
offline, and the primary replica transitions to the secondary role. When using a
DNN listener, you may need to adjust application connection strings to ensure that
connections are redirected to the new primary replica upon failover.
Open transactions: Open transactions against a database in a failing-over
availability group will close and roll back, and you need to manually reconnect. For
example, in SQL Server Management Studio, close the query window and open a
new one.
Client drivers
For ODBC, OLEDB, ADO.NET, JDBC, PHP, and Node.js drivers, users need to explicitly
specify the DNN listener name and port as the server name in the connection string. To
ensure rapid connectivity upon failover, add MultiSubnetFailover=True to the
connection string if the SQL client supports it.
Tools
Users of SQL Server Management Studio, sqlcmd, Azure Data Studio, and SQL Server
Data Tools need to explicitly specify the DNN listener name and port as the server name
in the connection string to connect to the listener.
Creating the DNN listener via the SQL Server Management Studio (SSMS) GUI is
currently not supported.
In this configuration, the mirroring endpoint URL for the FCI replica needs to use the FCI
DNN. Likewise, if the FCI is used as a read-only replica, the read-only routing to the FCI
replica needs to use the FCI DNN.
The format for the mirroring endpoint is: ENDPOINT_URL = 'TCP://<FCI DNN DNS name>:
<mirroring endpoint port>' .
For example, if your FCI DNN DNS name is dnnlsnr , and 5022 is the port of the FCI's
mirroring endpoint, the Transact-SQL (T-SQL) code snippet to create the endpoint URL
looks like:
SQL
ENDPOINT_URL = 'TCP://dnnlsnr:5022'
Likewise, the format for the read-only routing URL is: TCP://<FCI DNN DNS name>:<SQL
Server instance port> .
For example, if your DNN DNS name is dnnlsnr , and 1444 is the port used by the read-
only target SQL Server FCI, the T-SQL code snippet to create the read-only routing URL
looks like:
SQL
READ_ONLY_ROUTING_URL = 'TCP://dnnlsnr:1444'
You can omit the port in the URL if it is the default 1433 port. For a named instance,
configure a static port for the named instance and specify it in the read-only routing
URL.
Replication
Transactional, Merge, and Snapshot Replication all support replacing the VNN listener
with the DNN listener and port in replication objects that connect to the listener.
For more information on how to use replication with availability groups, see Publisher
and AG, Subscriber and AG, and Distributor and AG.
MSDTC
Both local and clustered MSDTC are supported but MSDTC uses a dynamic port, which
requires a standard Azure Load Balancer to configure the HA port. As such, either the
VM must use a standard IP reservation, or it cannot be exposed to the internet.
Define two rules, one for the RPC Endpoint Mapper port 135, and one for the real
MSDTC port. After failover, modify the LB rule to the new MSDTC port after it changes
on the new node.
If the MSDTC is local, be sure to allow outbound communication.
Distributed query
Distributed query relies on a linked server, which can be configured using the AG DNN
listener and port. If the port is not 1433, choose the Use other data source option in
SQL Server Management Studio (SSMS) when configuring your linked server.
FileStream
Filestream is supported but not for scenarios where users access the scoped file share by
using the Windows File API.
Filetable
Filetable is supported but not for scenarios where users access the scoped file share by
using the Windows File API.
Linked servers
Configure the linked server using the AG DNN listener name and port. If the port is not
1433, choose the Use other data source option in SQL Server Management Studio
(SSMS) when configuring your linked server.
Frequently asked questions
What is the expected failover time when the DNN listener is used?
For the DNN listener, the failover time will be just the AG failover time, without any
additional time (like probe time when you're using Azure Load Balancer).
Is there any version requirement for SQL clients to support DNN with OLEDB and
ODBC?
SQL Server does not require any configuration change to use DNN, but some SQL
Server features might require more consideration.
Does the DNN listener work when replicas are in multiple subnets?
Yes. The cluster binds the DNN in DNS with the physical IP addresses of all replicas
in the availability group regardless of the subnet. The SQL client tries all IP
addresses of the DNS name regardless of the subnet.
Next steps
To learn more, see:
Applies to:
SQL Server on Azure VM
This article describes how to prepare Azure virtual machines (VMs) to use them with a
SQL Server failover cluster instance (FCI). Configuration settings vary depending on the
FCI storage solution, so validate that you're choosing the correct configuration to suit
your environment and business.
To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best
practices.
Note
It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.
Prerequisites
A Microsoft Azure subscription. Get started with a free Azure account .
A Windows domain on Azure virtual machines, or an on-premises Active Directory
extended to Azure with virtual network peering.
An account that has permissions to create objects on Azure virtual machines and in
Active Directory.
An Azure virtual network and one or more subnets with enough IP address space
for these components:
Both virtual machines
An IP address for the Windows failover cluster
An IP address for each FCI
DNS configured on the Azure network, pointing to the domain controllers.
Choose VM availability
The failover cluster feature requires virtual machines to be placed in an availability set or
an availability zone.
Carefully select the VM availability option that matches your intended cluster
configuration:
Azure shared disks: the availability option varies depending on whether you're using
Premium SSD or Ultra Disk:
Premium SSD zone-redundant storage (ZRS): availability zone, with VMs placed in
different zones. Premium SSD ZRS replicates your Azure managed disk synchronously
across three Azure availability zones in the selected region. VMs that are part of the
failover cluster can be placed in different availability zones, helping you achieve a
zone-redundant SQL Server FCI that provides a VM availability SLA of 99.99%. Disk
latency for ZRS is higher due to the cross-zonal copy of data.
Premium SSD locally redundant storage (LRS): availability set, with VMs placed in
different fault/update domains. You can also choose to place the VMs inside a
proximity placement group to locate them closer to each other. Combining an
availability set and a proximity placement group provides the lowest latency for
shared disks, as data is replicated locally within one datacenter, and provides a VM
availability SLA of 99.95%.
Ultra Disk locally redundant storage (LRS): availability zone, but the VMs must be
placed in the same availability zone. Ultra disks offer the lowest disk latency and are
best for IO-intensive workloads. Because all VMs that are part of the FCI must be in
the same availability zone, the VM availability SLA is only 99.9%.
Premium file shares: availability set or availability zone.
Storage Spaces Direct: availability set.
Important
You can't set or change the availability set after you've created a virtual machine.
Subnets
For SQL Server on Azure VMs, you have the option to deploy your SQL Server VMs to a
single subnet, or to multiple subnets.
Deploying your VMs to multiple subnets leverages the cluster OR dependency for IP
addresses and matches the on-premises experience when connecting to your failover
cluster instance. The multi-subnet approach is recommended for SQL Server on Azure
VMs for simpler manageability and faster failover times.
If you deploy your SQL Server VMs to multiple subnets, follow the steps in this section
to create your virtual networks with additional subnets, and then once the SQL Server
VMs are created, assign secondary IP addresses within those subnets to the VMs.
Deploying your SQL Server VMs to a single subnet does not require any additional
network configuration.
Single subnet
Place both virtual machines in a single subnet that has enough IP addresses for
both virtual machines and all FCIs that you might eventually install to the cluster.
This approach requires an extra component to route connections to your FCI, such
as an Azure Load Balancer or a distributed network name (DNN).
If you choose to deploy your SQL Server VMs to a single subnet, review the
differences between the Azure Load Balancer and DNN connectivity options and
decide which option works best for you before preparing the rest of your
environment for your FCI.
Configure DNS
Configure your virtual network to use your DNS server. First, identify the DNS IP address,
and then add it to your virtual network.
To identify the IP address of the DNS server VM in the Azure portal, follow these steps:
1. Go to your resource group in the Azure portal and select the DNS server VM.
2. On the VM page, choose Networking in the Settings pane.
3. Note the NIC Private IP address as this is the IP address of the DNS server. In the
example image, the private IP address is 10.38.0.4.
1. Go to your resource group in the Azure portal , and select your virtual network.
2. Select DNS servers under the Settings pane and then select Custom.
3. Enter the private IP address you identified previously in the IP Address field, such
as 10.38.0.4 , or provide the internal IP address of your internal DNS server.
4. Select Save.
Create the virtual machines
After you've configured your VM virtual network and chosen VM availability, you're
ready to create your virtual machines. You can choose to use an Azure Marketplace
image that does or doesn't have SQL Server already installed on it. However, if you
choose an image for SQL Server on Azure VMs, you'll need to uninstall SQL Server from
the virtual machine before configuring the failover cluster instance.
NIC considerations
On an Azure VM guest failover cluster, we recommend a single NIC per server (cluster
node). Azure networking has physical redundancy, which makes additional NICs
unnecessary on an Azure IaaS VM guest cluster. Although the cluster validation report
will issue a warning that the nodes are only reachable on a single network, this warning
can be safely ignored on Azure IaaS VM guest failover clusters.
Place both virtual machines:
In the same Azure resource group as your availability set, if you're using availability
sets.
On the same virtual network as your domain controller and DNS server, or on a
virtual network that has suitable connectivity to your domain controller.
In the Azure availability set or availability zone.
You can create an Azure virtual machine by using an image with or without SQL Server
preinstalled to it. If you choose the SQL Server image, you'll need to manually uninstall
the SQL Server instance before installing the failover cluster instance.
Assign secondary IP addresses to each SQL Server VM to use for the failover cluster
instance network name, and for Windows Server 2016 and earlier, assign secondary IP
addresses to each SQL Server VM for the cluster network name as well. Doing this
negates the need for an Azure Load Balancer, as is the requirement in a single subnet
environment.
On Windows Server 2016 and earlier, you need to assign an additional secondary IP
address to each SQL Server VM to use for the Windows cluster IP, since the cluster uses
the cluster network name rather than the default distributed network name (DNN)
introduced in Windows Server 2019. With a DNN, the cluster name object (CNO) is
automatically registered with the IP addresses for all the nodes of the cluster,
eliminating the need for a dedicated Windows cluster IP address.
If you're on Windows Server 2016 or earlier, follow the steps in this section to assign a
secondary IP address to each SQL Server VM for both the FCI network name and the
cluster.
If you're on Windows Server 2019 or later, only assign a secondary IP address for the FCI
network name, and skip the steps to assign a Windows cluster IP, unless you plan to
configure your cluster with a virtual network name (VNN), in which case assign both IP
addresses to each SQL Server VM as you would for Windows Server 2016.
1. Go to your resource group in the Azure portal and select the first SQL Server
VM.
2. Select Networking in the Settings pane, and then select the Network Interface:
3. On the Network Interface page, select IP configurations in the Settings pane and
then choose + Add to add an additional IP address:
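The portal steps above can also be scripted. A minimal sketch using the Az PowerShell module, where the resource group, NIC name, and IP address are placeholders for your environment:
PowerShell
# get the NIC of the SQL Server VM
$nic = Get-AzNetworkInterface -ResourceGroupName "sql-vm-rg" -Name "sqlvm1-nic"
# add a secondary IP configuration for the FCI network name
Add-AzNetworkInterfaceIpConfig -Name "ipconfig-fci" -NetworkInterface $nic `
    -Subnet $nic.IpConfigurations[0].Subnet -PrivateIpAddress "10.38.1.110"
# apply the change
$nic | Set-AzNetworkInterface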
1. Connect to the virtual machine by using RDP. When you first connect to a virtual
machine by using RDP, a prompt asks you if you want to allow the PC to be
discoverable on the network. Select Yes.
2. Open Programs and Features in the Control Panel.
3. In Programs and Features, right-click Microsoft SQL Server 201_ (64-bit) and
select Uninstall/Change.
4. Select Remove.
5. Select the default instance.
6. Remove all features under Database Engine Services, Analysis Services, and
Reporting Services - Native. Don't remove anything under Shared Features. You'll
see something like the following screenshot:
If you use a load balancer for single subnet scenario, you'll also need to open the port
that the health probe uses. By default, the health probe uses port 59999, but it can be
any TCP port that you specify when you create the load balancer.
This table details the ports that you might need to open, depending on your FCI
configuration:
SQL Server (TCP 1433): Normal port for default instances of SQL Server. If you used an
image from the gallery, this port is automatically opened.
Health probe (TCP 59999): Any open TCP port. Configure the load balancer health probe
and the cluster to use this port.
File share (TCP 445): Used by FCI with Premium file share.
Next steps
Now that you've prepared your virtual machine environment, you're ready to configure
your failover cluster instance.
Choose one of the following guides to configure the FCI environment that's appropriate
for your business:
Applies to:
SQL Server on Azure VM
This article explains how to create a failover cluster instance (FCI) by using Azure shared
disks with SQL Server on Azure Virtual Machines (VMs).
To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best
practices.
Note
It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.
Prerequisites
Before you complete the instructions in this article, you should already have:
To attach the shared disk to your SQL Server VMs, follow these steps:
1. Select the VM in the Azure portal that you will attach the shared disk to.
2. Select Disks in the Settings pane.
3. Select Attach existing disks to attach the shared disk to the VM.
4. Choose the shared disk from the Disk name drop-down.
5. Select Save.
6. Repeat these steps for every cluster node SQL Server VM.
After a few moments, the shared data disk is attached to the VM and appears in the list
of Data disks for that VM.
To initialize the disks for your SQL Server VM, follow these steps:
Configure quorum
Since the disk witness is the most resilient quorum option, and the FCI solution uses
Azure shared disks, it's recommended to configure a disk witness as the quorum
solution.
If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.
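As a sketch, assuming a small clustered disk named Cluster Disk 3 has been reserved for the witness (the resource name is a placeholder), the disk witness can be set with the FailoverClusters PowerShell module:
PowerShell
Set-ClusterQuorum -DiskWitness "Cluster Disk 3"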
Validate cluster
Validate the cluster on one of the virtual machines by using the Failover Cluster Manager
UI or PowerShell.
1. Under Server Manager, select Tools, and then select Failover Cluster Manager.
2. Under Failover Cluster Manager, select Action, and then select Validate
Configuration.
3. Select Next.
4. Under Select Servers or a Cluster, enter the names of both virtual machines.
5. Under Testing options, select Run only tests I select.
6. Select Next.
7. Under Test Selection, select all tests except Storage.
8. Select Next.
9. Under Confirmation, select Next. The Validate a Configuration wizard runs the
validation tests.
To validate the cluster by using PowerShell, run the following script from an
administrator PowerShell session on one of the virtual machines:
PowerShell
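A minimal sketch of the validation script, where the node names are placeholders; like the UI steps, it skips the storage tests:
PowerShell
Test-Cluster -Node ("sqlvm1","sqlvm2") -Include "Inventory","Network","System Configuration"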
5. Choose the Azure shared disk in the Add Disks to a Cluster window. Select OK.
6. After the shared disk is added to the cluster, you will see it in the Failover Cluster
Manager.
1. Connect to the first virtual machine by using Remote Desktop Protocol (RDP).
2. In Failover Cluster Manager, make sure that all core cluster resources are on the
first virtual machine. If necessary, move the disks to that virtual machine.
3. If the version of the operating system is Windows Server 2019 and the Windows
Cluster was created using the default Distributed Network Name (DNN), then
the FCI installation for SQL Server 2017 and below will fail with the error The given
key was not present in the dictionary.
During installation, SQL Server setup queries for the existing Virtual Network Name
(VNN) and doesn't recognize the Windows Cluster DNN. The issue has been fixed
in SQL Server 2019 setup. For SQL Server 2017 and below, follow these steps to
avoid the installation error:
4. Locate the installation media. If the virtual machine uses one of the Azure
Marketplace images, the media is located at C:\SQLServer_<version number>_Full .
5. Select Setup.
7. Select New SQL Server failover cluster installation. Follow the instructions in the
wizard to install the SQL Server FCI.
8. On the Cluster Disk Selection page, select all the shared disks that were attached
to the VM.
9. On the Cluster Network Configuration page, the IP you provide varies depending
on if your SQL Server VMs were deployed to a single subnet, or multiple subnets.
a. For a single subnet environment, provide the IP address that you plan to add
to the Azure Load Balancer
b. For a multi-subnet environment, provide the secondary IP address in the
subnet of the first SQL Server VM that you previously designated as the IP
address of the failover cluster instance network name:
10. On the Database Engine Configuration page, ensure the database directories are
on the Azure shared disk(s).
11. After you complete the instructions in the wizard, setup installs the SQL Server FCI
on the first node.
12. After FCI installation succeeds on the first node, connect to the second node by
using RDP.
13. Open the SQL Server Installation Center, and then select Installation.
14. Select Add node to a SQL Server failover cluster. Follow the instructions in the
wizard to install SQL Server and add the node to the FCI.
16. After you complete the instructions in the wizard, setup adds the second SQL
Server FCI node.
17. Repeat these steps on any other SQL Server VMs you want to participate in the
SQL Server failover cluster instance.
Note
Azure Marketplace gallery images come with SQL Server Management Studio
installed. If you didn't use a marketplace image, download SQL Server
Management Studio (SSMS).
Register with SQL IaaS Agent extension
To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent
extension. Note that only limited functionality will be available on SQL VMs that have
failover clustered instances of SQL Server (FCIs).
If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister the
SQL Server VM from the extension and register it again after your FCI is installed.
PowerShell
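A minimal sketch of unregistering and re-registering by using the Az.SqlVirtualMachine module, where the resource group, VM name, location, and license type are placeholders for your environment:
PowerShell
# unregister the SQL Server VM from the SQL IaaS Agent extension
Remove-AzSqlVM -ResourceGroupName "sql-vm-rg" -Name "sqlvm1"
# register it again after the FCI is installed
New-AzSqlVM -ResourceGroupName "sql-vm-rg" -Name "sqlvm1" -Location "eastus" -LicenseType "PAYG"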
Configure connectivity
If you deployed your SQL Server VMs in multiple subnets, skip this step. If you deployed
your SQL Server VMs to a single subnet, then you'll need to configure an additional
component to route traffic to your FCI. You can configure a virtual network name (VNN)
with an Azure Load Balancer, or a distributed network name for a failover cluster
instance. Review the differences between the two and then deploy either a distributed
network name or a virtual network name and Azure Load Balancer for your failover
cluster instance.
Limitations
Azure virtual machines support Microsoft Distributed Transaction Coordinator
(MSDTC) on Windows Server 2019 with storage on CSVs and a standard load
balancer. MSDTC is not supported on Windows Server 2016 and earlier.
SQL Server FCIs registered with the extension do not support features that require
the agent, such as automated backup, patching, and advanced portal
management. See the table of benefits.
Next steps
If Azure shared disks are not the appropriate FCI storage solution for you, consider
creating your FCI using premium file shares or Storage Spaces Direct instead.
Applies to:
SQL Server on Azure VM
This article explains how to create a failover cluster instance (FCI) by using Storage
Spaces Direct with SQL Server on Azure Virtual Machines (VMs). Storage Spaces Direct
acts as a software-based virtual storage area network (VSAN) that synchronizes the
storage (data disks) between the nodes (Azure VMs) in a Windows cluster.
To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best
practices.
Note
It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.
Overview
Storage Spaces Direct (S2D) supports two types of architectures: converged and
hyperconverged. A hyperconverged infrastructure places the storage on the same
servers that host the clustered application, so that storage is on each SQL Server FCI
node.
The following diagram shows the complete solution, which uses hyperconverged
Storage Spaces Direct with SQL Server on Azure VMs:
The preceding diagram shows the following resources in the same resource group:
Two virtual machines in a Windows Server failover cluster. When a virtual machine
is in a failover cluster, it's also called a cluster node or node.
Each virtual machine has two or more data disks.
Storage Spaces Direct synchronizes the data on the data disks and presents the
synchronized storage as a storage pool.
The storage pool presents a Cluster Shared Volume (CSV) to the failover cluster.
The SQL Server FCI cluster role uses the CSV for the data drives.
An Azure load balancer to hold the IP address for the SQL Server FCI for a single
subnet scenario.
An Azure availability set holds all the resources.
Note
You can create this entire solution in Azure from a template. An example of a
template is available on the GitHub Azure quickstart templates page. This
example isn't designed or tested for any specific workload. You can run the
template to create a SQL Server FCI with Storage Spaces Direct storage connected
to your domain. You can evaluate the template and modify it for your purposes.
Prerequisites
Before you complete the instructions in this article, you should already have:
An Azure subscription. Get started with a free Azure account .
Two or more prepared Windows Azure virtual machines in an availability set.
An account that has permissions to create objects on both Azure virtual machines
and in Active Directory.
The latest version of PowerShell.
Configure quorum
Although the disk witness is the most resilient quorum option, it's not supported for
failover cluster instances configured with Storage Spaces Direct. As such, the cloud
witness is the recommended quorum solution for this type of cluster configuration for
SQL Server on Azure VMs.
If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.
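As a sketch, the cloud witness can be configured with the FailoverClusters PowerShell module, pointing at an Azure storage account; the account name and key are placeholders:
PowerShell
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"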
To validate the cluster by using the UI, do the following on one of the virtual machines:
1. Under Server Manager, select Tools, and then select Failover Cluster Manager.
2. Under Failover Cluster Manager, select Action, and then select Validate
Configuration.
3. Select Next.
4. Under Select Servers or a Cluster, enter the names of both virtual machines.
6. Select Next.
7. Under Test Selection, select all tests except for Storage, as shown here:
8. Select Next.
To validate the cluster by using PowerShell, run the following script from an
administrator PowerShell session on one of the virtual machines:
PowerShell
Add storage
The disks for Storage Spaces Direct need to be empty. They can't contain partitions or
other data. To clean the disks, follow the instructions in Deploy Storage Spaces Direct.
PowerShell
Enable-ClusterS2D
In Failover Cluster Manager, you can now see the storage pool.
2. Create a volume.
Storage Spaces Direct automatically creates a storage pool when you enable it.
You're now ready to create a volume. The PowerShell cmdlet New-Volume
automates the volume creation process. This process includes formatting, adding
the volume to the cluster, and creating a CSV. This example creates an 800
gigabyte (GB) CSV:
PowerShell
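A minimal sketch of the volume creation, consistent with the 800-GB example above; the pool wildcard and volume friendly name are placeholders:
PowerShell
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VDisk01" -FileSystem CSVFS_ReFS -Size 800GB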
2. In Failover Cluster Manager, make sure all core cluster resources are on the first
virtual machine. If necessary, move all resources to that virtual machine.
3. If the version of the operating system is Windows Server 2019 and the Windows
Cluster was created using the default Distributed Network Name (DNN), then
the FCI installation for SQL Server 2017 and below will fail with the error The given
key was not present in the dictionary.
During installation, SQL Server setup queries for the existing Virtual Network Name
(VNN) and doesn't recognize the Windows Cluster DNN. The issue has been fixed
in SQL Server 2019 setup. For SQL Server 2017 and below, follow these steps to
avoid the installation error:
6. Select New SQL Server failover cluster installation. Follow the instructions in the
wizard to install the SQL Server FCI.
7. On the Cluster Network Configuration page, the IP you provide varies depending
on if your SQL Server VMs were deployed to a single subnet, or multiple subnets.
a. For a single subnet environment, provide the IP address that you plan to add
to the Azure Load Balancer
b. For a multi-subnet environment, provide the secondary IP address in the
subnet of the first SQL Server VM that you previously designated as the IP
address of the failover cluster instance network name:
10. After FCI installation succeeds on the first node, connect to the second node by
using RDP.
12. Select Add node to a SQL Server failover cluster. Follow the instructions in the
wizard to install SQL Server and add the node to the FCI.
14. After you complete the instructions in the wizard, setup adds the second SQL
Server FCI node.
15. Repeat these steps on any other nodes that you want to add to the SQL Server
failover cluster instance.
Note
Azure Marketplace gallery images come with SQL Server Management Studio
installed. If you didn't use a marketplace image, download SQL Server
Management Studio (SSMS).
Register with SQL IaaS Agent extension
To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent
extension. Note that only limited functionality will be available on SQL VMs that have
failover clustered instances of SQL Server (FCIs).
If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister the
SQL Server VM from the extension and register it again after your FCI is installed.
PowerShell
Configure connectivity
If you deployed your SQL Server VMs in multiple subnets, skip this step. If you deployed
your SQL Server VMs to a single subnet, then you'll need to configure an additional
component to route traffic to your FCI. You can configure a virtual network name (VNN)
with an Azure Load Balancer, or a distributed network name for a failover cluster
instance. Review the differences between the two and then deploy either a distributed
network name or a virtual network name and Azure Load Balancer for your failover
cluster instance.
Limitations
Azure virtual machines support Microsoft Distributed Transaction Coordinator
(MSDTC) on Windows Server 2019 with storage on CSVs and a standard load
balancer. MSDTC is not supported on Windows Server 2016 and earlier.
Disks that have been attached as NTFS-formatted disks can be used with Storage
Spaces Direct only if the disk eligibility option is unchecked, or cleared, when
storage is being added to the cluster.
SQL Server FCIs registered with the extension do not support features that require
the agent, such as automated backup, patching, and advanced portal
management. See the table of benefits.
Failover cluster instances using Storage Spaces Direct as the shared storage do not
support using a disk witness for the quorum of the cluster. Use a cloud witness
instead.
Next steps
If Storage Spaces Direct isn't the appropriate FCI storage solution for you, consider
creating your FCI by using Azure shared disks or Premium File Shares instead.
Applies to:
SQL Server on Azure VM
This article explains how to create a failover cluster instance (FCI) with SQL Server on
Azure Virtual Machines (VMs) by using a premium file share.
Premium file shares are SSD backed and provide consistently low-latency file shares that
are fully supported for use with failover cluster instances for SQL Server 2012 or later on
Windows Server 2012 or later. Premium file shares give you greater flexibility, allowing
you to resize and scale a file share without any downtime.
To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best
practices.
Note
It's now possible to lift and shift your failover cluster instance solution to SQL
Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to
learn more.
Prerequisites
Before you complete the instructions in this article, you should already have:
An Azure subscription.
An account that has permissions to create objects on both Azure virtual machines
and in Active Directory.
Two or more prepared Azure virtual machines running Windows, in an availability set or
different availability zones.
A premium file share to be used as the clustered drive, based on the storage quota
of your database for your data files.
The latest version of PowerShell.
1. In the Azure portal, go to the storage account that contains your premium file share.
2. Go to File shares under Data storage, and then select the premium file share you
want to use for your SQL storage.
3. Select Connect to bring up the connection string for your file share.
4. In the drop-down list, select the drive letter you want to use, choose Storage
account key as the authentication method, and then copy the code block to a text
editor, such as Notepad.
5. Use Remote Desktop Protocol (RDP) to connect to the SQL Server VM with the
account that your SQL Server FCI will use for the service account.
6. Open an administrative PowerShell console.
7. Run the command that you copied earlier to your text editor from the File share
portal.
8. Go to the share by using either File Explorer or the Run dialog box (Windows + R
on your keyboard). Use the network path
\\storageaccountname.file.core.windows.net\filesharename . For example,
\\sqlvmstorageaccount.file.core.windows.net\sqlpremiumfileshare
9. Create at least one folder on the newly connected file share to place your SQL data
files into.
10. Repeat these steps on each SQL Server VM that will participate in the cluster.
Important
Consider using a separate file share for backup files to save the input/output
operations per second (IOPS) and space capacity of this share for data and log files.
You can use either a Premium or Standard File Share for backup files.
Configure quorum
The cloud witness is the recommended quorum solution for this type of cluster
configuration for SQL Server on Azure VMs.
If you have an even number of votes in the cluster, configure the quorum solution that
best suits your business needs. For more information, see Quorum with SQL Server VMs.
Validate cluster
Validate the cluster on one of the virtual machines by using the Failover Cluster Manager
UI or PowerShell.
To validate the cluster by using the UI, do the following on one of the virtual machines:
1. Under Server Manager, select Tools, and then select Failover Cluster Manager.
2. Under Failover Cluster Manager, select Action, and then select Validate
Configuration.
3. Select Next.
4. Under Select Servers or a Cluster, enter the names of both virtual machines.
5. Under Testing Options, select Run only tests I select.
6. Select Next.
7. Under Test Selection, select all tests except for Storage and Storage Spaces Direct.
8. Select Next.
9. Under Confirmation, select Next. The Validate a Configuration wizard runs the
validation tests.
To validate the cluster by using PowerShell, run the following script from an
administrator PowerShell session on one of the virtual machines:
PowerShell
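The validation script itself was not preserved in this copy. A minimal sketch of the standard cluster validation command follows; the node names are placeholders, and the excluded tests mirror the UI steps above:

```powershell
# Validate the cluster, skipping the storage tests that don't apply
# to a premium file share configuration. Replace the node names
# with your own SQL Server VM names.
Test-Cluster -Node ("<node1>", "<node2>") -Include "Inventory", "Network", "System Configuration"
```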
1. Connect to the first virtual machine by using RDP.
2. In Failover Cluster Manager, make sure that all the core cluster resources are on
the first virtual machine. If necessary, move all resources to this virtual machine.
3. If the version of the operating system is Windows Server 2019 and the Windows
Cluster was created using the default Distributed Network Name (DNN), then the
FCI installation for SQL Server 2017 and below will fail with the error The given
key was not present in the dictionary.
During installation, SQL Server setup queries for the existing Virtual Network Name
(VNN) and doesn't recognize the Windows Cluster DNN. The issue has been fixed
in SQL Server 2019 setup. For SQL Server 2017 and below, follow these steps to
avoid the installation error:
4. Locate the installation media. If the virtual machine uses one of the Azure
Marketplace images, the media is located at C:\SQLServer_<version number>_Full .
5. Select Setup.
6. In the SQL Server Installation Center, select Installation.
7. Select New SQL Server failover cluster installation, and then follow the
instructions in the wizard to install the SQL Server FCI.
8. On the Cluster Network Configuration page, the IP you provide varies depending
on if your SQL Server VMs were deployed to a single subnet, or multiple subnets.
a. For a single subnet environment, provide the IP address that you plan to add
to the Azure Load Balancer.
b. For a multi-subnet environment, provide the secondary IP address in the
subnet of the first SQL Server VM that you previously designated as the IP
address of the failover cluster instance network name.
9. In Database Engine Configuration, the data directories need to be on the
premium file share. Enter the full path of the share, in this format:
\\storageaccountname.file.core.windows.net\filesharename\foldername . A warning
appears, telling you that you've specified a file server as the data directory. This
warning is expected. Ensure that the user account you used to access the VM via
RDP when you persisted the file share is the same account that the SQL Server
service uses to avoid possible failures.
10. After you complete the steps in the wizard, Setup installs a SQL Server FCI on the
first node.
11. After FCI installation succeeds on the first node, connect to the second node by
using RDP.
12. Open the SQL Server Installation Center, and then select Installation.
13. Select Add node to a SQL Server failover cluster. Follow the instructions in the
wizard to install SQL Server and add the node to the FCI.
14. After selecting Next in Cluster Network Configuration, setup shows a dialog box
indicating that SQL Server Setup detected multiple subnets. Select Yes to confirm.
15. After you complete the instructions in the wizard, setup adds the second SQL
Server FCI node.
16. Repeat these steps on any other nodes that you want to add to the SQL Server
failover cluster instance.
Note
Azure Marketplace gallery images come with SQL Server Management Studio
installed. If you didn't use a marketplace image, download SQL Server
Management Studio (SSMS).
If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister the
SQL Server VM from the extension and register it again after your FCI is installed.
PowerShell
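The commands did not survive extraction here. As a hedged sketch, unregistering and re-registering might look like the following, assuming the Az.SqlVirtualMachine PowerShell module; the resource names and license type are placeholders:

```powershell
# Unregister the SQL Server VM from the SQL IaaS Agent extension.
# This deletes only the SQL virtual machine resource, not the VM itself.
Remove-AzSqlVM -ResourceGroupName '<resource-group>' -Name '<sql-vm-name>'

# After the FCI is installed, register the VM with the extension again.
# The license type shown (PAYG) is an assumption; use your own license type.
New-AzSqlVM -ResourceGroupName '<resource-group>' -Name '<sql-vm-name>' `
    -Location '<region>' -LicenseType 'PAYG'
```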
Configure connectivity
If you deployed your SQL Server VMs in multiple subnets, skip this step. If you deployed
your SQL Server VMs to a single subnet, then you'll need to configure an additional
component to route traffic to your FCI. You can configure a virtual network name (VNN)
with an Azure Load Balancer, or a distributed network name for a failover cluster
instance. Review the differences between the two and then deploy either a distributed
network name or a virtual network name and Azure Load Balancer for your failover
cluster instance.
Limitations
Microsoft Distributed Transaction Coordinator (MSDTC) is not supported on
Windows Server 2016 and earlier.
Filestream isn't supported for a failover cluster with a premium file share. To use
filestream, deploy your cluster by using Storage Spaces Direct or Azure shared
disks instead.
SQL Server FCIs registered with the extension do not support features that require
the agent, such as automated backup, patching, and advanced portal
management. See the table of benefits.
Database Snapshots are not currently supported with Azure Files due to sparse
files limitations.
Since database snapshots are not supported, CHECKDB for user databases falls
back to CHECKDB WITH TABLOCK. TABLOCK limits the checks that are performed -
DBCC CHECKCATALOG is not run on the database, and Service Broker data is not
validated.
DBCC CHECKDB on master and msdb database is not supported.
Databases that use the in-memory OLTP feature are not supported on a failover
cluster instance deployed with a premium file share. If your business requires in-
memory OLTP, consider deploying your FCI with Azure shared disks or Storage
Spaces Direct instead.
If your SQL Server VM has already been registered with the SQL IaaS Agent extension
and you've enabled any features that require the agent, you'll need to unregister from
the extension by deleting the SQL virtual machine resource for the corresponding VMs
and then register it with the SQL IaaS Agent extension again. When you're deleting the
SQL virtual machine resource by using the Azure portal, clear the check box next to the
correct virtual machine to avoid deleting the virtual machine.
Next steps
If premium file shares are not the appropriate FCI storage solution for you, consider
creating your FCI by using Azure shared disks or Storage Spaces Direct instead.
Applies to:
SQL Server on Azure VM
Tip
On Azure virtual machines, clusters use a load balancer to hold an IP address that needs
to be on one cluster node at a time. In this solution, the load balancer holds the IP
address for the virtual network name (VNN) that the clustered resource uses in Azure.
This article teaches you to configure a load balancer by using the Azure Load Balancer
service. The load balancer will route traffic to your failover cluster instance with SQL
Server on Azure VMs for high availability and disaster recovery (HADR).
For an alternative connectivity option for SQL Server 2019 CU2 and later, consider a
distributed network name (DNN) instead. A DNN offers simplified configuration and
improved failover.
Prerequisites
Before you complete the steps in this article, you should already have:
Determined that Azure Load Balancer is the appropriate connectivity option for
your FCI.
Configured your FCI.
Installed the latest version of PowerShell.
External: An external load balancer can route traffic from the public to internal
resources. When you configure an external load balancer, you can't use a public IP
address like the FCI IP address.
1. In the Azure portal , go to the resource group that contains the virtual machines.
2. Select Add. Search Azure Marketplace for load balancer. Select Load Balancer.
3. Select Create.
4. In Create load balancer, on the Basics tab, set up the load balancer by using the
following values:
5. Select Add to associate the backend pool with the availability set that contains the
VMs.
6. Under Virtual machine, choose the virtual machines that will participate as cluster
nodes. Be sure to include all virtual machines that will host the FCI.
Add only the primary IP address of each VM. Don't add any secondary IP
addresses.
Update the variables in the following script with values from your environment.
Remove the angle brackets ( < and > ) from the script.
PowerShell
$ClusterNetworkName = "<Cluster Network Name>"
$IPResourceName = "<SQL Server FCI IP Address Resource Name>"
$ILBIP = "<n.n.n.n>"
[int]$ProbePort = <nnnnn>
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"=$ProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}
The following table describes the values that you need to update:
Variable Value
ClusterNetworkName The name of the Windows Server failover cluster for the network. In
Failover Cluster Manager > Networks, right-click the network and
select Properties. The correct value is under Name on the General
tab.
IPResourceName The resource name for the IP address of the SQL Server FCI. In
Failover Cluster Manager > Roles, under the SQL Server FCI role,
under Server Name, right-click the IP address resource and select
Properties. The correct value is under Name on the General tab.
ILBIP The IP address of the internal load balancer. This address is configured
in the Azure portal as the internal load balancer's frontend address.
This is also the IP address of the SQL Server FCI. You can find it in
Failover Cluster Manager, on the same properties page where you
located the value for IPResourceName .
ProbePort The probe port that you configured in the load balancer's health
probe. Any unused TCP port is valid.
SubnetMask The subnet mask for the cluster parameter. It must be the TCP/IP
broadcast address: 255.255.255.255 .
After you set the cluster probe, you can see all the cluster parameters in PowerShell.
Run this script:
PowerShell
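The script itself is missing from this copy. One way to list the parameters, reusing the $IPResourceName value described in the table above, is:

```powershell
# List all cluster parameters for the FCI IP address resource,
# including the probe port and subnet mask set previously.
Get-ClusterResource $IPResourceName | Get-ClusterParameter
```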
If your client doesn't support the MultiSubnetFailover parameter, you can modify the
RegisterAllProvidersIP and HostRecordTTL settings to prevent connectivity delays upon
failover.
Use PowerShell to modify the RegisterAllProvidersIp and HostRecordTTL settings:
PowerShell
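The original script is missing here; a sketch using the standard failover-cluster cmdlets follows. The resource name and the 300-second TTL are assumptions — substitute your FCI network name resource and a TTL that suits your environment:

```powershell
# Stop registering all provider IPs and shorten the DNS TTL so
# clients re-resolve the name sooner after a failover.
Get-ClusterResource '<network-name-resource>' | Set-ClusterParameter -Name RegisterAllProvidersIP -Value 0
Get-ClusterResource '<network-name-resource>' | Set-ClusterParameter -Name HostRecordTTL -Value 300

# Take the network name resource offline and back online to apply the change.
Stop-ClusterResource '<network-name-resource>'
Start-ClusterResource '<network-name-resource>'
```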
To learn more, see the documentation about listener connection timeout in SQL Server.
Tip
A lower HostRecordTTL value means cached DNS entries expire sooner, so
clients can then reconnect more quickly. As such, reducing the HostRecordTTL
setting might increase traffic to the DNS servers.
Test failover
Test failover of the clustered resource to validate cluster functionality:
1. Connect to one of the SQL Server cluster nodes by using Remote Desktop Protocol
(RDP).
2. Open Failover Cluster Manager. Select Roles. Notice which node owns the SQL
Server FCI role.
3. Right-click the SQL Server FCI role.
4. Select Move, and then select Best Possible Node.
Failover Cluster Manager shows the role, and its resources go offline. The resources
then move and come back online in the other node.
Test connectivity
To test connectivity, sign in to another virtual machine in the same virtual network. Open
SQL Server Management Studio and connect to the SQL Server FCI name.
Note
If you need to, you can download SQL Server Management Studio.
Next steps
To learn more, see:
Applies to:
SQL Server on Azure VM
Tip
On Azure Virtual Machines, the distributed network name (DNN) routes traffic to the
appropriate clustered resource. It provides an easier way to connect to the SQL Server
failover cluster instance (FCI) than the virtual network name (VNN), without the need for
an Azure Load Balancer.
This article teaches you to configure a DNN resource to route traffic to your failover
cluster instance with SQL Server on Azure VMs for high availability and disaster recovery
(HADR).
For an alternative connectivity option, consider a virtual network name and Azure Load
Balancer instead.
Overview
The distributed network name (DNN) replaces the virtual network name (VNN) as the
connection point when used with an Always On failover cluster instance on SQL Server
VMs. This removes the need for an Azure Load Balancer to route traffic to the VNN,
which simplifies deployment and maintenance and improves failover.
With an FCI deployment, the VNN still exists, but the client connects to the DNN DNS
name instead of the VNN name.
Prerequisites
Before you complete the steps in this article, you should already have:
SQL Server 2019 CU8 or later, SQL Server 2017 CU25 or later, or SQL Server 2016
SP3 or later, running on Windows Server 2016 or later.
Decided that the distributed network name is the appropriate connectivity option
for your HADR solution.
Configured your failover cluster instances.
Installed the latest version of PowerShell.
The following PowerShell command adds a DNN resource to the SQL Server FCI cluster
group with a resource name of <dnnResourceName> . The resource name is used to
uniquely identify a resource. Use one that makes sense to you and is unique across the
cluster. The resource type must be Distributed Network Name .
The -Group value must be the name of the cluster group that corresponds to the SQL
Server FCI where you want to add the distributed network name. For a default instance,
the typical format is SQL Server (MSSQLSERVER) .
PowerShell
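The command block is missing from this copy; based on the description above, it would take this shape, with the resource and group names as placeholders:

```powershell
# Add a DNN resource to the cluster group of the SQL Server FCI.
Add-ClusterResource -Name '<dnnResourceName>' `
    -ResourceType 'Distributed Network Name' -Group '<SQL Server FCI cluster group>'
```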
For example, to create your DNN resource dnn-demo for a default SQL Server FCI, use the
following PowerShell command:
PowerShell
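The example itself was lost in extraction; following the generic form described above, it would be:

```powershell
# Create the DNN resource dnn-demo for a default SQL Server instance.
Add-ClusterResource -Name 'dnn-demo' `
    -ResourceType 'Distributed Network Name' -Group 'SQL Server (MSSQLSERVER)'
```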
Clients use the DNS name to connect to the SQL Server FCI. You can choose a unique
value. Or, if you already have an existing FCI and don't want to update client connection
strings, you can configure the DNN to use the current VNN that clients are already
using. To do so, you need to rename the VNN before setting the DNN in DNS.
Use this command to set the DNS name for your DNN:
PowerShell
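The command did not survive here. Assuming the DNN resource exposes its DNS name through a DnsName cluster parameter, it would look like:

```powershell
# Point the DNN resource at the DNS name that clients will use.
Get-ClusterResource -Name '<dnnResourceName>' | Set-ClusterParameter -Name DnsName -Value '<DNSName>'
```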
The DNSName value is what clients use to connect to the SQL Server FCI. For example, for
clients to connect to FCIDNN , use the following PowerShell command:
PowerShell
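The example block is missing; it would follow the same pattern:

```powershell
# Clients will connect to the SQL Server FCI by using the name FCIDNN.
Get-ClusterResource -Name 'dnn-demo' | Set-ClusterParameter -Name DnsName -Value 'FCIDNN'
```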
Clients will now enter FCIDNN into their connection string when connecting to the SQL
Server FCI.
Warning
Some restrictions apply for renaming the VNN. For more information, see Renaming an
FCI.
If using the current VNN is not necessary for your business, skip this section. After
you've renamed the VNN, set the cluster DNN DNS name.
PowerShell
For example, to start your DNN resource dnn-demo , use the following PowerShell
command:
PowerShell
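The command is missing from this copy; a minimal sketch:

```powershell
# Bring the DNN resource online so it starts answering in DNS.
Start-ClusterResource -Name 'dnn-demo'
```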
The following is an example connection string for a SQL FCI DNN with the DNS name of
FCIDNN:
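The example string was lost in extraction. A minimal sketch, with the database name and security options as assumptions, might be:

```
Data Source=FCIDNN;Initial Catalog=<database>;Integrated Security=True;MultiSubnetFailover=True
```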
Additionally, if the DNN is not using the original VNN, SQL clients that connect to the
SQL Server FCI will need to update their connection string to the DNN DNS name. To
avoid this requirement, you can update the DNS name value to be the name of the
VNN. But you'll need to replace the existing VNN with a placeholder first.
Test failover
Test failover of the clustered resource to validate cluster functionality.
Failover Cluster Manager shows the role, and its resources go offline. The resources
then move and come back online in the other node.
Test connectivity
To test connectivity, sign in to another virtual machine in the same virtual network. Open
SQL Server Management Studio and connect to the SQL Server FCI by using the DNN
DNS name.
If you need to, you can download SQL Server Management Studio.
Avoid IP conflict
This is an optional step to prevent the virtual IP (VIP) address used by the FCI resource
from being assigned to another resource in Azure as a duplicate.
Although customers now use the DNN to connect to the SQL Server FCI, the virtual
network name (VNN) and virtual IP cannot be deleted as they are necessary components
of the FCI infrastructure. However, since there is no longer a load balancer reserving the
virtual IP address in Azure, there is a risk that another resource on the virtual network
will be assigned the same IP address as the virtual IP address used by the FCI. This can
potentially lead to a duplicate IP conflict issue.
APIPA address
To avoid using duplicate IP addresses, configure an APIPA address (also known as a link-
local address). To do so, run the following command:
PowerShell
Get-ClusterResource "virtual IP address" | Set-ClusterParameter -Multiple
@{"Address"="169.254.1.1";"SubnetMask"="255.255.0.0";"OverrideAddressMatch"=
1;"EnableDhcp"=0}
In this command, "virtual IP address" is the name of the clustered VIP address resource,
and "169.254.1.1" is the APIPA address chosen for the VIP address. Choose the address
that best suits your business. Set OverrideAddressMatch=1 to allow the IP address to be
on any network, including the APIPA address space.
Limitations
The client connecting to the DNN listener must support the
MultiSubnetFailover=True parameter in the connection string.
There might be more considerations when you're working with other SQL Server
features and an FCI with a DNN. For more information, see FCI with DNN
interoperability.
Next steps
To learn more, see:
Applies to:
SQL Server on Azure VM
Tip
There are certain SQL Server features that rely on a hard-coded virtual network name
(VNN). As such, when using the distributed network name (DNN) resource with your
failover cluster instance and SQL Server on Azure VMs, there are some additional
considerations.
In this article, learn how to configure the network alias when using the DNN resource, as
well as which SQL Server features require additional consideration.
For a default instance, you can map the VNN to the DNN DNS name directly, such that
VNN = DNN DNS name.
For example, if the VNN name is FCI1, the instance name is MSSQLSERVER, and the
DNN is FCI1DNN (clients previously connected to FCI1 and now connect to FCI1DNN),
then map the VNN FCI1 to the DNN FCI1DNN.
For a named instance the network alias mapping should be done for the full instance,
such that VNN\Instance = DNN\Instance .
For example, if the VNN name is FCI1, the instance name is instA, and the DNN is
FCI1DNN (clients previously connected to FCI1\instA and now connect to
FCI1DNN\instA), then map the VNN FCI1\instA to the DNN FCI1DNN\instA.
Client drivers
For ODBC, OLEDB, ADO.NET, JDBC, PHP, and Node.js drivers, users need to explicitly
specify the DNN DNS name as the server name in the connection string. To ensure rapid
connectivity upon failover, add MultiSubnetFailover=True to the connection string if the
SQL client supports it.
Tools
Users of SQL Server Management Studio, sqlcmd, Azure Data Studio, and SQL Server
Data Tools need to explicitly specify the DNN DNS name as the server name in the
connection string.
The format for the mirroring endpoint is: ENDPOINT_URL = 'TCP://<DNN DNS name>:
<mirroring endpoint port>' .
For example, if your DNN DNS name is dnnlsnr , and 5022 is the port of the FCI's
mirroring endpoint, the Transact-SQL (T-SQL) code snippet to create the endpoint URL
looks like:
SQL
ENDPOINT_URL = 'TCP://dnnlsnr:5022'
Likewise, the format for the read-only routing URL is: TCP://<DNN DNS name>:<SQL Server
instance port> .
For example, if your DNN DNS name is dnnlsnr , and 1444 is the port used by the read-
only target SQL Server FCI, the T-SQL code snippet to create the read-only routing URL
looks like:
SQL
READ_ONLY_ROUTING_URL = 'TCP://dnnlsnr:1444'
You can omit the port in the URL if it is the default 1433 port. For a named instance,
configure a static port for the named instance and specify it in the read-only routing
URL.
Replication
Replication has three components: Publisher, Distributor, Subscriber. Any of these
components can be a failover cluster instance. Because the FCI VNN is heavily used in
replication configuration, both explicitly and implicitly, a network alias that maps the
VNN to the DNN might be necessary for replication to work.
Keep using the VNN name as the FCI name within replication, but when one of these
components is remote, create a network alias that maps the VNN to the DNN before
you configure replication.
For example, assume you have a Publisher that's configured as an FCI using DNN in a
replication topology, and the Distributor is remote. In this case, create a network alias on
the Distributor server to map the Publisher VNN to the Publisher DNN:
Use the full instance name for a named instance.
Database mirroring
You can configure database mirroring with an FCI as either database mirroring partner.
Configure it by using Transact-SQL (T-SQL) rather than the SQL Server Management
Studio GUI. Using T-SQL will ensure that the database mirroring endpoint is created
using the DNN instead of the VNN.
For example, if your DNN DNS name is dnnlsnr , and the database mirroring endpoint is
7022, the following T-SQL code snippet configures the database mirroring partner:
SQL
ALTER DATABASE <database_name>
SET PARTNER = 'TCP://dnnlsnr:7022'
GO
For client access, the Failover Partner property can handle database mirroring failover,
but not FCI failover.
MSDTC
The FCI can participate in distributed transactions coordinated by Microsoft Distributed
Transaction Coordinator (MSDTC). Clustered MSDTC and local MSDTC are supported
with FCI DNN. In Azure, an Azure Load Balancer is necessary for a clustered MSDTC
deployment.
Tip
The DNN defined in the FCI does not replace the Azure Load Balancer requirement
for the clustered MSDTC.
FileStream
Though FileStream is supported for a database in an FCI, accessing FileStream or
FileTable by using File System APIs with DNN is not supported.
Linked servers
Using a linked server with an FCI DNN is supported. Either use the DNN directly to
configure a linked server, or use a network alias to map the VNN to the DNN.
For example, to create a linked server with DNN DNS name dnnlsnr for named instance
insta1 , use the following Transact-SQL (T-SQL) command:
SQL
USE [master]
GO
EXEC master.dbo.sp_addlinkedserver
@server = N'dnnlsnr\insta1',
@srvproduct=N'SQL Server' ;
GO
Alternatively, you can create the linked server using the virtual network name (VNN)
instead, but you will then need to define a network alias to map the VNN to the DNN.
For example, for instance name insta1 , VNN name vnnname , and DNN name dnnlsnr ,
use the following Transact-SQL (T-SQL) command to create a linked server using the
VNN:
SQL
USE [master]
GO
EXEC master.dbo.sp_addlinkedserver
@server = N'vnnname\insta1',
@srvproduct=N'SQL Server' ;
GO
For DNN, the failover time will be just the FCI failover time, without any time added
(like probe time when you're using Azure Load Balancer).
Is there any version requirement for SQL clients to support DNN with OLEDB and
ODBC?
Are any SQL Server configuration changes required for me to use DNN?
SQL Server does not require any configuration change to use DNN, but some SQL
Server features might require more consideration.
Yes. The cluster binds the DNN in DNS with the physical IP addresses of all nodes
in the cluster regardless of the subnet. The SQL client tries all IP addresses of the
DNS name regardless of the subnet.
Next steps
To learn more, see:
Transact-SQL reference (Database
Engine)
Article • 07/12/2023
Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Azure Synapse Analytics Analytics Platform System (PDW) SQL Endpoint in
Microsoft Fabric Warehouse in Microsoft Fabric
This article gives the basics about how to find and use the Microsoft Transact-SQL (T-
SQL) reference articles. T-SQL is central to using Microsoft SQL products and services. All
tools and applications that communicate with a SQL Server database do so by sending
T-SQL commands.
For example, this article applies to all versions, and has the following label.
Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Azure Synapse Analytics Analytics Platform System (PDW)
As another example, the following label indicates an article that applies only to Azure
Synapse Analytics and Parallel Data Warehouse.
In some cases, the article is used by a product or service, but not all of the arguments
are supported. In this case, other Applies to sections are inserted into the appropriate
argument descriptions in the body of the article.
Next steps
Tutorial: Writing Transact-SQL Statements
Transact-SQL Syntax Conventions (Transact-SQL)
Connection modules for Microsoft SQL
Database
Article • 07/19/2023
This article provides download links to connection modules or drivers that your client
programs can use for interacting with Microsoft SQL Server, and with its twin in the
cloud, Azure SQL Database. Drivers are available for a variety of programming
languages, running on the following operating systems:
Linux
macOS
Windows
OOP-to-relational mismatch: Object-oriented programs work with class instances, while
SQL queries return data as rows and columns, so lower-level drivers leave it to your
code to translate between the two formats.
ORM: Other drivers or frameworks return queried data in the OOP format, avoiding the
mismatch. These drivers work by expecting that classes have been defined to match the
data columns of particular SQL tables. The driver then performs the object-relational
mapping (ORM) to return queried data as an instance of a class. Microsoft's Entity
Framework (EF) for C#, and Hibernate for Java, are two examples.
The present article devotes separate sections to these two kinds of connection drivers.
Language Download the SQL driver
C# ADO.NET (Microsoft.Data.SqlClient); .NET Core for Linux-Ubuntu, macOS, and Windows; Entity Framework Core; Entity Framework
C++ ODBC; OLE DB
Java JDBC
PHP PHP
Go GORM
Python Django (SQL Server backend for Django)
Build-an-app webpages
https://aka.ms/sqldev takes you to a set of Build-an-app webpages. The webpages
provide information about numerous combinations of programming language,
operating system, and SQL connection driver. Among the information provided by the
Build-an-app webpages are the following items:
Details about how to get started from the very beginning, for each combination of
language + operating system + driver.
Instructions for installing the latest SQL connection drivers.
Code examples for each of the following items:
Object-relational code examples.
ORM code examples.
Columnstore index demonstrations for much faster performance.
Related links
Code examples for connecting to Azure SQL Database in the cloud, with Java and
other languages.
Frequently asked questions for
SQL Server on Azure VMs
FAQ
This article provides answers to some of the most common questions about running
SQL Server on Azure Virtual Machines (VMs).
If your Azure issue is not addressed in this article, visit the Azure forums on Microsoft
Q&A and Stack Overflow. You can post your issue in these forums, or post to
@AzureSupport on Twitter. You can also submit an Azure support request. To submit a
support request, on the Azure support page, select Get support.
Images
What SQL Server virtual machine gallery images
are available?
Azure maintains virtual machine images for all supported major releases of SQL Server
on all editions for both Windows and Linux. For more information, see the complete list
of Windows VM images and Linux VM images.
Alternatively, you can use one of the SQL Server images from Azure Marketplace to
create your own generalized image of SQL Server on an Azure VM. Note that you must
delete the following registry key in the source image before creating your own image.
Failure to do so can result in bloating of the SQL Server setup bootstrap folder and/or
leave the SQL IaaS Agent extension in a failed state.
Registry Key path:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\SysPrep
External\Specialize
Note
SQL Server on Azure VMs, including those deployed from custom generalized
images, should be registered with the SQL IaaS Agent extension to meet
compliance requirements and to utilize optional features such as automated
patching and automatic backups. The extension also allows you to specify the
license type for each SQL Server VM.
Creation
How do I create an Azure virtual machine with
SQL Server?
The easiest method is to create a virtual machine that includes SQL Server. For a tutorial
on signing up for Azure and creating a SQL Server VM from the portal, see Provision a
SQL Server virtual machine in the Azure portal. You can select a virtual machine image
that uses pay-per-second SQL Server licensing, or you can use an image that allows you
to bring your own SQL Server license. You also have the option of manually installing
SQL Server on a VM with either a freely licensed edition (Developer or Express) or by
reusing an on-premises license. Be sure to register your SQL Server VM with the SQL
IaaS Agent extension to manage your SQL Server VM in the portal, as well as utilize
features such as automated patching and automatic backups. If you bring your own
license, you must have License Mobility through Software Assurance on Azure . For
more information, see Pricing guidance for SQL Server Azure VMs.
Licensing
How can I install my licensed copy of SQL Server
on an Azure VM?
There are three ways to do this. If you're an Enterprise Agreement (EA) customer, you
can provision one of the virtual machine images. If you have Software Assurance , you
can enable the Azure Hybrid Benefit on an existing pay-as-you-go (PAYG) image. Or you
can copy the SQL Server installation media to a Windows Server VM, and then install
SQL Server on the VM. Be sure to register your SQL Server VM with the extension for
features such as portal management, automated backup and automated patching.
Administration
Can I install a second instance of SQL Server on
the same VM? Can I change installed features of
the default instance?
Yes. The SQL Server installation media is located in a folder on the C drive. Run
Setup.exe from that location to add new SQL Server instances or to change other
installed features of SQL Server on the machine. Note that some features, such as
Automated Backup, Automated Patching, and Azure Key Vault Integration, only operate
against the default instance, or a named instance that was configured properly (See
Question 3). Customers using Software Assurance through the Azure Hybrid Benefit or
the pay-as-you-go licensing model can install multiple instances of SQL Server on the
virtual machine without incurring extra licensing costs. Additional SQL Server instances
may strain system resources unless configured correctly.
If you decide to uninstall the default instance, uninstall the SQL Server IaaS
Agent extension as well.
You can also copy this setup media to other virtual machines to install, or
upgrade, that same version and edition of SQL Server. Customers who have
Software Assurance can obtain their installation media from the Volume
Licensing Center.
Important
SQL Server FCIs registered with the extension do not support features that require
the agent, such as automated backup, patching, and advanced portal management.
Review feature benefits to learn more.
Resources
Windows VMs:
Linux VMs:
Applies to:
SQL Server on Azure VM
This article provides pricing guidance for SQL Server on Azure Virtual Machines. There
are several options that affect cost, and it is important to pick the right image that
balances costs with business requirements.
Tip
If you only need to find out a cost estimate for a specific combination of SQL Server
edition and virtual machine (VM) size, see the pricing page for Windows or
Linux . Select your platform and SQL Server edition from the OS/Software list.
If you want to run a lightweight workload in production (fewer than 4 cores, less
than 1 GB of memory, under 10 GB per database), use the freely licensed SQL Server
Express edition. A SQL Server Express edition VM only incurs charges for the VM itself.
For these development/test and lightweight production workloads, you can also save
money by choosing a smaller VM size that matches these workloads. The D2as_v5 might
be a good choice in some scenarios.
To create an Azure VM running SQL Server 2022 with one of these images, see the
following links:
You have two options to pay for SQL Server licensing for these editions: pay per usage or
Azure Hybrid Benefit.
The cost is the same for all versions of SQL Server (2012 SP3 to 2022). The per-second
licensing cost depends on the number of VM vCPUs.
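To make the per-vCPU scaling concrete, here's a small sketch; the hourly rate below is made up for illustration, and real rates are on the Azure pricing page:

```shell
# Hypothetical per-vCPU hourly SQL Server license rate -- check the Azure
# pricing page for Windows or Linux for real figures.
rate_per_vcpu_hour="0.10"
vcpus=4
hours_per_month=730

# The license charge scales linearly with the vCPU count.
monthly_license_cost=$(awk -v r="$rate_per_vcpu_hour" -v v="$vcpus" -v h="$hours_per_month" \
  'BEGIN { printf "%.2f", r * v * h }')
echo "$monthly_license_cost"   # prints 292.00
```

Doubling the vCPU count doubles the license charge, which is why right-sizing the VM matters for cost.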
Paying per usage is recommended for workloads with unknown lifetime or scale. For
example, an app that may not be required in a few months, or that may require
more or less compute power depending on demand.
To create an Azure VM running SQL Server 2022 with one of these pay-as-you-go
images, see the following links:
Important
When you create a SQL Server virtual machine in the Azure portal, the Choose a
size window shows an estimated cost. It is important to note that this estimate is
only the compute costs for running the VM along with any OS licensing costs
(Windows or third-party Linux operating systems).
It does not include additional SQL Server licensing costs for Web, Standard, and
Enterprise editions. To get the most accurate pricing estimate, select your operating
system and SQL Server edition on the pricing page for Windows or Linux .
Note
Bringing your own SQL Server licensing through Azure Hybrid Benefit is recommended
for:
Workloads with known lifetime and scale. For example, an app that is required for
the whole year and whose demand has been forecast.
To use AHB with a SQL Server VM, you must have a license for SQL Server Standard or
Enterprise and Software Assurance , which is a required option through some volume
licensing programs and an optional purchase with others. The pricing level provided
through Volume Licensing programs varies, based on the type of agreement and the
quantity and or commitment to SQL Server. But as a rule of thumb, Azure Hybrid Benefit
for continuous production workloads has the following benefits:
Cost savings: The Azure Hybrid Benefit offers up to 55% savings. For more
information, see Switch licensing model.
Free passive secondary replica: Another benefit of bringing your own license is the
free licensing for one passive secondary replica for high availability and one
passive secondary for disaster recovery per SQL Server. This cuts the licensing
cost of a highly available SQL Server deployment (for example, using Always On
availability groups) by more than half.
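A back-of-envelope sketch of why the free passive secondary roughly halves licensing costs in a two-replica deployment; the per-replica cost below is hypothetical:

```shell
# Hypothetical annual per-replica license cost -- real prices vary by
# edition and core count.
per_replica_cost=10000
replicas=2   # one primary plus one passive HA secondary

without_benefit=$((per_replica_cost * replicas))
with_benefit=$per_replica_cost   # the passive secondary is license-free
echo "$without_benefit vs $with_benefit"   # prints 20000 vs 10000
```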
Note
As of November 2022, it's possible to use free licensing for one passive secondary
replica for high availability and one passive secondary replica for disaster recovery
when using pay-as-you-go licensing as well as AHB.
Reduce costs
To avoid unnecessary costs, choose an optimal virtual machine size and consider
intermittent shutdowns for non-continuous workloads.
For more information on choosing the best VM size for your workload, see VM size best
practices.
For example, if you are simply trying out SQL Server on an Azure VM, you would not
want to incur charges by accidentally leaving it running for weeks. One solution is to use
the automatic shutdown feature .
Automatic shutdown is part of a larger set of similar features provided by Azure DevTest
Labs .
For other workflows, consider automatically shutting down and restarting Azure VMs
with a scripting solution, such as Azure Automation .
Important
Shutting down and deallocating your VM is the only way to avoid charges. Simply
stopping or using power options to shut down the VM still incurs usage charges.
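A sketch of the CLI call that actually stops billing; the resource group and VM name are placeholders, and the command is echoed rather than executed:

```shell
# Resource group and VM names are placeholders; substitute your own.
rg="my-resource-group"
vm="my-sql-vm"

# 'az vm deallocate' releases the compute allocation so billing stops;
# 'az vm stop' leaves the VM allocated and still accruing usage charges.
cmd="az vm deallocate --resource-group $rg --name $vm"
echo "$cmd"
```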
Next steps
For general Azure pricing guidance, see Prevent unexpected costs with Azure billing and
cost management. For the latest Azure Virtual Machines pricing, including SQL Server,
see the Azure Virtual Machines pricing page for Windows VMs and Linux VMs .
For an overview of SQL Server on Azure Virtual Machines, see the following articles:
Applies to:
SQL Server
Azure SQL Database
Azure Synapse Analytics
SQL Server Data Tools (SSDT) is a modern development tool for building SQL Server
relational databases, databases in Azure SQL, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS) reports. With SSDT, you
can design and deploy any SQL Server content type with the same ease as you would
develop an application in Visual Studio.
Note
To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.
1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".
2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.
3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.
For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .
Analysis Services
Integration Services
Reporting Services
Relational databases: SQL Server 2016 (13.x) through SQL Server 2022 (16.x)
With Visual Studio 2019, the required functionality to enable Analysis Services,
Integration Services, and Reporting Services projects has moved into the respective
Visual Studio (VSIX) extensions only.
Offline installation
For scenarios where offline installation is required, such as low bandwidth or isolated
networks, SSDT is available for offline installation. Two approaches are available:
For more details, follow the Step-by-Step Guidelines for Offline Installation.
Previous versions
To download and install SSDT for Visual Studio 2017, or an older version of SSDT, see
Previous releases of SQL Server Data Tools (SSDT and SSDT-BI).
See Also
SSDT MSDN Forum
Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback
Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
SQL Endpoint in Microsoft Fabric
Warehouse in
Microsoft Fabric
SQL Server Management Studio (SSMS) is an integrated environment for managing any
SQL infrastructure, from SQL Server to Azure SQL Database. SSMS provides tools to
configure, monitor, and administer instances of SQL Server and databases. Use SSMS to
deploy, monitor, and upgrade the data-tier components used by your applications and
build queries and scripts.
Use SSMS to query, design, and manage your databases and data warehouses, wherever
they are - on your local computer or in the cloud.
Download SSMS
Free Download for SQL Server Management Studio (SSMS) 19.1
SSMS 19.1 is the latest general availability (GA) version. If you have a preview version of
SSMS 19 installed, you should uninstall it before installing SSMS 19.1. If you have SSMS
19.x installed, installing SSMS 19.1 upgrades it to 19.1.
By using SQL Server Management Studio, you agree to its license terms and privacy
statement . If you have comments or suggestions or want to report issues, the best
way to contact the SSMS team is at SQL Server user feedback .
The SSMS 19.x installation doesn't upgrade or replace SSMS versions 18.x or earlier.
SSMS 19.x installs alongside previous versions, so both versions are available for use.
However, if you have an earlier preview version of SSMS 19 installed, you must uninstall
it before installing SSMS 19.1. You can see if you have a preview version by going to the
Help > About window.
If a computer contains side-by-side installations of SSMS, verify you start the correct
version for your specific needs. The latest version is labeled Microsoft SQL Server
Management Studio v19.1.
Important
Beginning with SQL Server Management Studio (SSMS) 18.7, Azure Data Studio is
automatically installed alongside SSMS. Users of SQL Server Management Studio
are now able to benefit from the innovations and features in Azure Data Studio.
Azure Data Studio is a cross-platform and open-source desktop tool for your
environments, whether in the cloud, on-premises, or hybrid.
To learn more about Azure Data Studio, check out What is Azure Data Studio or
the FAQ.
Available languages
This release of SSMS can be installed in the following languages:
Tip
If you are accessing this page from a non-English language version and want to see
the most up-to-date content, please select Read in English at the top of this page.
You can download different languages from the US-English version site by selecting
available languages.
Note
The SQL Server PowerShell module is a separate install through the PowerShell
Gallery. For more information, see Download SQL Server PowerShell Module.
What's new
For details and more information about what's new in this release, see Release notes for
SQL Server Management Studio.
Previous versions
This article is for the latest version of SSMS only. To download previous versions of
SSMS, visit Previous SSMS releases.
Note
Connectivity to Azure Analysis Services through Azure Active Directory with MFA
requires SSMS 18.5.1 or later.
Unattended install
You can install SSMS using PowerShell.
Follow the steps below if you want to install SSMS in the background with no GUI
prompts.
PowerShell
Example:
PowerShell
$media_path = "C:\Installers\SSMS-Setup-ENU.exe"
$install_path = "$env:SystemDrive\SSMSto"
$params = " /Install /Quiet SSMSInstallRoot=`"$install_path`""
Start-Process -FilePath $media_path -ArgumentList $params -Wait
You can also pass /Passive instead of /Quiet to see the setup UI.
Uninstall
SSMS may install shared components if it's determined they're missing during SSMS
installation. SSMS won't automatically uninstall these components when you uninstall
SSMS.
Windows 11 (64-bit)
Windows 10 (64-bit) version 1607 (10.0.14393) or later
Windows Server 2022 (64-bit)
Windows Server 2019 (64-bit)
Windows Server 2016 (64-bit)
Supported hardware:
1.8 GHz or faster x86 (Intel, AMD) processor. Dual-core or better recommended
2 GB of RAM; 4 GB of RAM recommended (2.5 GB minimum if running on a virtual
machine)
Hard disk space: Minimum of 2 GB up to 10 GB of available space
Note
SSMS is available only as a 32-bit application for Windows. If you need a tool that
runs on operating systems other than Windows, we recommend Azure Data Studio.
Azure Data Studio is a cross-platform tool that runs on macOS, Linux, and
Windows. For details, see Azure Data Studio.
Get help for SQL tools
All the ways to get help
SSMS user feedback .
Submit an Azure Data Studio Git issue
Contribute to Azure Data Studio
SQL Client Tools Forum
SQL Server Data Tools - MSDN forum
Support options for business users
Next steps
SQL tools
SQL Server Management Studio documentation
Azure Data Studio
Download SQL Server Data Tools (SSDT)
Latest updates
Azure Data Architecture Guide
SQL Server Blog
Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
Analytics Platform System (PDW)
To manage your database, you need a tool. Whether your databases run in the cloud, on
Windows, on macOS, or on Linux, your tool doesn't need to run on the same platform as
the database.
You can view the links to the different SQL tools in the following tables.
Recommended tools
The following tools provide a graphical user interface (GUI).
Azure Data Studio (Windows, macOS, Linux): A light-weight editor that can run
on-demand SQL queries, and view and save results as text, JSON, or Excel. Edit
data, organize your favorite database connections, and browse database objects
in a familiar object browsing experience.
SQL Server Management Studio (SSMS) (Windows): Manage a SQL Server instance or
database with full GUI support. Access, configure, manage, administer, and
develop all components of SQL Server, Azure SQL Database, and Azure Synapse
Analytics. Provides a single comprehensive utility that combines a broad group
of graphical tools with a number of rich script editors to provide access to
SQL for developers and database administrators of all skill levels.
Visual Studio Code (Windows, macOS, Linux): The mssql extension for Visual
Studio Code is the official SQL Server extension that supports connections to
SQL Server and a rich editing experience for T-SQL in Visual Studio Code. Write
T-SQL scripts in a light-weight editor.
Command-line tools
The tools below are the main command-line tools.
bcp (Windows, Linux): The bulk copy program utility (bcp) bulk copies data
between an instance of Microsoft SQL Server and a data file in a user-specified
format.
mssql-cli (preview) (Windows, macOS, Linux): mssql-cli is an interactive
command-line tool for querying SQL Server, with modern features such as
IntelliSense and syntax highlighting.
sqlcmd (Windows, macOS, Linux): The sqlcmd utility lets you enter Transact-SQL
statements, system procedures, and script files at the command prompt.
SQL Server PowerShell (Windows, macOS, Linux): SQL Server PowerShell provides
cmdlets for working with SQL.
Configuration Manager: Use SQL Server Configuration Manager to configure SQL
Server services and configure network connectivity. Configuration Manager runs
on Windows.
Data Migration Assistant: The Data Migration Assistant tool helps you upgrade to
a modern data platform by detecting compatibility issues that can impact
database functionality in your new version of SQL Server or Azure SQL Database.
Distributed Replay: Use the Distributed Replay feature to help you assess the
impact of future SQL Server upgrades. Also use Distributed Replay to help assess
the impact of hardware and operating system upgrades, and SQL Server tuning.
ssbdiagnose: The ssbdiagnose utility reports issues in Service Broker
conversations or the configuration of Service Broker services.
SQL Server Migration Assistant: Use SQL Server Migration Assistant to automate
database migration to SQL Server from Microsoft Access, DB2, MySQL, Oracle, and
Sybase.
If you're looking for additional tools that aren't mentioned on this page, see SQL
Command Prompt Utilities and Download SQL Server extended features and tools.
Overview of SQL Server on Linux Azure
Virtual Machines
Article • 09/19/2022
Applies to:
SQL Server on Azure VM
SQL Server on Azure Virtual Machines enables you to use full versions of SQL Server in
the cloud without having to manage any on-premises hardware. SQL Server VMs also
simplify licensing costs when you pay as you go.
Azure virtual machines run in many different geographic regions around the world.
They also offer a variety of machine sizes. The virtual machine image gallery allows you
to create a SQL Server VM with the right version, edition, and operating system. This
makes virtual machines a good option for many different SQL Server workloads.
If you're new to Azure SQL, check out the SQL Server on Azure VM Overview video from
our in-depth Azure SQL video series:
https://learn.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-
Overview-4-of-61/player
Tip
For more information about how to understand pricing for SQL Server images, see
the pricing page for Linux VMs running SQL Server .
SQL Server 2019 on Red Hat Enterprise Linux (RHEL) 8: Enterprise, Standard, Web,
Developer
SQL Server 2017 on Red Hat Enterprise Linux (RHEL) 7.4: Enterprise, Standard,
Web, Express, Developer
SQL Server 2017 on SUSE Linux Enterprise Server (SLES) v12 SP2: Enterprise,
Standard, Web, Express, Developer
Note
To see the available SQL Server virtual machine images for Windows, see Overview
of SQL Server on Azure Virtual Machines (Windows).
Installed packages
When you configure SQL Server on Linux, you install the Database Engine package and
then several optional packages depending on your requirements. The Linux virtual
machine images for SQL Server automatically install most packages for you. The
following table shows which packages are installed for each distribution.
RHEL
SLES
Ubuntu
Note
The SQL IaaS Agent extension for SQL Server on Azure Linux Virtual Machines is only
available for the Ubuntu Linux distribution.
Storage
Introduction to Microsoft Azure Storage
Networking
Virtual Network overview
IP addresses in Azure
Create a Fully Qualified Domain Name in the Azure portal
SQL
SQL Server on Linux documentation
Azure SQL Database comparison
Next steps
Get started with SQL Server on Linux virtual machines:
Get answers to commonly asked questions about SQL Server VMs on Linux:
Applies to:
SQL Server on Azure VM
In this quickstart tutorial, you use the Azure portal to create a Linux virtual machine with
SQL Server 2017 installed. You learn the following:
Prerequisites
If you don't have an Azure subscription, create a free account before you begin.
5. In the search box, type SQL Server 2019, and select Enter to start the search.
6. Limit the search results by selecting Operating system > Red Hat.
7. Select a SQL Server 2019 Linux image from the search results. This tutorial uses
SQL Server 2019 on RHEL74.
Tip
The Developer edition lets you test or develop with the features of the
Enterprise edition but no SQL Server licensing costs. You only pay for the cost
of running the Linux VM.
8. Select Create.
Change size: Select this option to pick a machine size and when done,
choose Select. For more information about VM machine sizes, see VM sizes.
Tip
For development and functional testing, use a VM size of DS2 or higher. For
performance testing, use DS13 or higher.
Note
You have the choice of using an SSH public key or a Password for
authentication. SSH is more secure. For instructions on how to generate
an SSH key, see Create SSH keys on Linux and Mac for Linux VMs in
Azure.
Public inbound ports: Choose Allow selected ports and pick the SSH (22)
port in the Select public inbound ports list. In this quickstart, this step is
necessary to connect and complete the SQL Server configuration. If you want
to remotely connect to SQL Server, you will need to manually allow traffic to
the default port (1433) used by Microsoft SQL Server for connections over the
Internet after the virtual machine is created.
4. Make any changes you want to the settings in the following additional tabs or
keep the default settings.
Disks
Networking
Management
Guest config
Tags
Bash
ssh azureadmin@40.55.55.555
2. Run PuTTY.
4. Select Open and enter your username and password at the prompts.
For more information about connecting to Linux VMs, see Create a Linux VM on Azure
using the Portal.
Note
If you see a PuTTY security alert about the server's host key not being cached in the
registry, choose from the following options. If you trust this host, select Yes to add
the key to PuTTY's cache and continue connecting. If you want to carry on
connecting just once, without adding the key to the cache, select No. If you don't
trust this host, select Cancel to abandon the connection.
Bash
sudo systemctl stop mssql-server
Bash
1. Run the following commands to modify the PATH for both login sessions and
interactive/non-login sessions:
Bash
source ~/.bashrc
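The commands behind step 1 typically look like the following, assuming the default /opt/mssql-tools install location:

```shell
# Append the default mssql-tools location to PATH for login shells
# (~/.bash_profile) and interactive shells (~/.bashrc), then reload.
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc
```

After this, sqlcmd and bcp resolve from any new shell without a full path.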
Tip
If you selected the inbound port MS SQL (1433) in the settings during provisioning,
these changes have been made for you. You can go to the next section on how to
configure the firewall.
1. In the portal, select Virtual machines, and then select your SQL Server VM.
3. In the Networking window, select Add inbound port under Inbound Port Rules.
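On the VM itself, RHEL's firewalld also needs to allow the port. A sketch of the typical commands, echoed here because they require root and a running firewalld:

```shell
# Typical firewalld commands on RHEL to open SQL Server's default port 1433.
# Echoed as a sketch; run them directly on the VM as root.
cmds="sudo firewall-cmd --zone=public --add-port=1433/tcp --permanent
sudo firewall-cmd --reload"
printf '%s\n' "$cmds"
```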
Next steps
Now that you have a SQL Server 2017 virtual machine in Azure, you can connect locally
with sqlcmd to run Transact-SQL queries.
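A local connection typically looks like the following sketch; it's echoed rather than run because it needs a live SQL Server instance, and the sa password would be supplied interactively or with -P:

```shell
# Sketch of a local sqlcmd connection; echoed since it needs a running
# SQL Server instance and real credentials.
cmd="sqlcmd -S localhost -U sa -Q 'SELECT @@VERSION'"
echo "$cmd"
```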
If you configured the Azure VM for remote SQL Server connections, you should be able
to connect remotely. For an example of how to connect remotely to SQL Server on Linux
from Windows, see Use SSMS on Windows to connect to SQL Server on Linux. To
connect with Visual Studio Code, see Use Visual Studio Code to create and run Transact-
SQL scripts for SQL Server
For more general information about SQL Server on Linux, see Overview of SQL Server
2017 on Linux. For more information about using SQL Server 2017 Linux virtual
machines, see Overview of SQL Server 2017 virtual machines on Azure.
SQL Server IaaS Agent extension for
Linux
Article • 05/22/2023
Applies to:
SQL Server on Azure VM
The SQL Server IaaS Agent extension (SqlIaasExtension) runs on SQL Server on Linux
Azure Virtual Machines (VMs) to automate management and administration tasks.
This article provides an overview of the extension. See Register with the extension to
learn more.
Overview
The SQL Server IaaS Agent extension enables integration with the Azure portal and
unlocks the following benefits for SQL Server on Linux Azure VMs:
Installation
Register your SQL Server VM with the SQL Server IaaS Agent extension to create the
SQL virtual machine resource within your subscription, which is a separate resource from
the virtual machine resource. Unregistering your SQL Server VM from the extension will
remove the SQL virtual machine resource from your subscription but will not drop the
actual virtual machine.
The SQL Server IaaS Agent extension for Linux is currently only available with limited
functionality.
Azure portal
Verify the extension is installed by using the Azure portal.
Go to your Virtual machine resource in the Azure portal (not the SQL virtual machines
resource, but the resource for your VM). Select Extensions under Settings. You should
see the SqlIaasExtension extension listed, as in the following example:
Azure PowerShell
You can also use the Get-AzVMSqlServerExtension Azure PowerShell cmdlet:
PowerShell
Get-AzVMSqlServerExtension -VMName "vmname" -ResourceGroupName
"resourcegroupname"
The previous command confirms that the agent is installed and provides general status
information. You can get specific status information about automated backup and
patching by using the following commands:
PowerShell
$sqlext = Get-AzVMSqlServerExtension -VMName "vmname" -ResourceGroupName
"resourcegroupname"
$sqlext.AutoPatchingSettings
$sqlext.AutoBackupSettings
Limitations
The Linux SQL IaaS Agent extension has the following limitations:
Only SQL Server VMs running on the Ubuntu Linux operating system are
supported. Other Linux distributions are not currently supported.
SQL Server VMs running Ubuntu Linux Pro are not supported.
SQL Server VMs running on generalized images are not supported.
Only SQL Server VMs deployed through the Azure Resource Manager are
supported. SQL Server VMs deployed through the classic model are not supported.
Only SQL Server VMs with a single instance are supported. Multiple instances are not supported.
Privacy statement
When using SQL Server on Azure VMs and the SQL IaaS Agent extension, consider the
following privacy statements:
Data collection: The SQL IaaS Agent extension collects data for the express
purpose of giving customers optional benefits when using SQL Server on Azure
Virtual Machines. Microsoft will not use this data for licensing audits without the
customer's advance consent. See the SQL Server privacy supplement for more
information.
In-region data residency: SQL Server on Azure VMs and SQL IaaS Agent Extension
do not move or store customer data out of the region in which the VMs are
deployed.
Next steps
For more information about running SQL Server on Azure Virtual Machines, see the
What is SQL Server on Azure Linux Virtual Machines?.
Applies to:
SQL Server on Azure VM
Register your SQL Server VM with the SQL IaaS Agent extension to unlock a wealth of
feature benefits for your SQL Server on Linux Azure VM.
Overview
Registering with the SQL Server IaaS Agent extension creates the SQL virtual machine
resource within your subscription, which is a separate resource from the virtual machine
resource. Unregistering your SQL Server VM from the extension removes the SQL virtual
machine resource but will not drop the actual virtual machine.
To utilize the SQL IaaS Agent extension, you must first register your subscription with
the Microsoft.SqlVirtualMachine provider, which gives the SQL IaaS Agent extension
the ability to create resources within that specific subscription.
Important
The SQL IaaS Agent extension collects data for the express purpose of giving
customers optional benefits when using SQL Server within Azure Virtual Machines.
Microsoft will not use this data for licensing audits without the customer's advance
consent. See the SQL Server privacy supplement for more information.
Prerequisites
To register your SQL Server VM with the extension, you'll need:
An Azure subscription .
An Azure Resource Manager Ubuntu Linux virtual machine with SQL Server 2017 (or
greater) deployed to the public or Azure Government cloud.
The latest version of Azure CLI or Azure PowerShell (5.0 minimum).
Azure portal
Register your subscription with the resource provider by using the Azure portal:
Command line
Register your Azure subscription with the Microsoft.SqlVirtualMachine provider using
either Azure CLI or Azure PowerShell.
Azure CLI
Register your subscription with the resource provider by using the Azure CLI:
Azure CLI
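The registration command referenced above is typically the following; it's echoed as a sketch because it needs an authenticated Azure CLI session:

```shell
# Register the subscription with the SQL virtual machine resource provider.
# Echoed as a sketch; run it for real after 'az login'.
cmd="az provider register --namespace Microsoft.SqlVirtualMachine"
echo "$cmd"
```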
Register VM
The SQL IaaS Agent extension on Linux is only available in lightweight mode, which
supports only changing the license type and edition of SQL Server. Use the Azure CLI or
Azure PowerShell to register your SQL Server VM with the extension in lightweight
mode for limited functionality.
Provide the SQL Server license type as either pay-as-you-go ( PAYG ) to pay per usage,
Azure Hybrid Benefit ( AHUB ) to use your own license, or disaster recovery ( DR ) to
activate the free DR replica license.
Azure CLI
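A sketch of the lightweight-mode registration call; the VM and resource group names are placeholders, and the command is echoed rather than executed:

```shell
# Placeholders for VM and resource group; --license-type takes PAYG, AHUB,
# or DR, matching the options described above.
vm="my-sql-vm"
rg="my-resource-group"
cmd="az sql vm create --name $vm --resource-group $rg --license-type PAYG"
echo "$cmd"
```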
Azure portal
Verify the registration status by using the Azure portal:
Command line
Verify current SQL Server VM registration status using either Azure CLI or Azure
PowerShell. ProvisioningState shows as Succeeded if registration was successful.
Azure CLI
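Verification can be sketched like this; the names are placeholders, and the command is echoed since it needs an Azure login:

```shell
# Query the SQL virtual machine resource; provisioningState reads
# Succeeded after a successful registration. Names are placeholders.
cmd="az sql vm show --name my-sql-vm --resource-group my-resource-group --query provisioningState --output tsv"
echo "$cmd"
```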
An error indicates that the SQL Server VM has not been registered with the extension.
Automatic registration
Automatic registration is supported for Ubuntu Linux VMs.
Next steps
For more information, see the following articles:
Applies to:
SQL Server on Azure VM
Note
We use SQL Server 2017 with RHEL 7.6 in this tutorial, but it is possible to use SQL
Server 2019 with RHEL 7 or RHEL 8 to configure high availability. The commands to
configure the Pacemaker cluster and availability group resources have changed in
RHEL 8, and you'll want to look at the article Create availability group resource and
RHEL 8 resources for more information on the correct commands.
" Create a new resource group, availability set, and Linux virtual machines (VMs)
" Enable high availability (HA)
" Create a Pacemaker cluster
" Configure a fencing agent by creating a STONITH device
" Install SQL Server and mssql-tools on RHEL
" Configure SQL Server Always On availability group
" Configure availability group (AG) resources in the Pacemaker cluster
" Test a failover and the fencing agent
This tutorial will use the Azure CLI to deploy resources in Azure.
If you don't have an Azure subscription, create a free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see
Quickstart for Bash in Azure Cloud Shell.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're
running on Windows or macOS, consider running Azure CLI in a Docker container.
For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login
command. To finish the authentication process, follow the steps displayed in
your terminal. For other sign-in options, see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more
information about extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To
upgrade to the latest version, run az upgrade.
This article requires version 2.0.30 or later of the Azure CLI. If using Azure Cloud
Shell, the latest version is already installed.
Azure CLI
az vm availability-set create \
--resource-group <resourceGroupName> \
--name <availabilitySetName> \
--platform-fault-domain-count 2 \
--platform-update-domain-count 2
You should get the following results once the command completes:
Output
"id":
"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provider
s/Microsoft.Compute/availabilitySets/<availabilitySetName>",
"location": "eastus2",
"name": "<availabilitySetName>",
"platformFaultDomainCount": 2,
"platformUpdateDomainCount": 2,
"proximityPlacementGroup": null,
"resourceGroup": "<resourceGroupName>",
"sku": {
"capacity": null,
"name": "Aligned",
"tier": null
},
"statuses": null,
"tags": {},
"type": "Microsoft.Compute/availabilitySets",
"virtualMachines": []
Warning
If you choose a Pay-As-You-Go (PAYG) RHEL image, and configure high availability
(HA), you may be required to register your subscription. This can cause you to pay
twice for the subscription, as you will be charged for the Microsoft Azure RHEL
subscription for the VM, and a subscription to Red Hat. For more information, see
https://access.redhat.com/solutions/2458541 .
To avoid being "double billed", use a RHEL HA image when creating the Azure VM.
Images offered as RHEL-HA images are also PAYG images with HA repo pre-
enabled.
1. Get a list of virtual machine images that offer RHEL with HA:
Azure CLI
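The image-listing command was lost in this copy. A minimal sketch that produces the RHEL-HA image list shown below, assuming the standard az vm image list parameters:

```shell
# List all published RHEL images with the HA repository pre-enabled
az vm image list --all --offer "RHEL-HA" --publisher "RedHat"
```
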
"offer": "RHEL-HA",
"publisher": "RedHat",
"sku": "7.4",
"urn": "RedHat:RHEL-HA:7.4:7.4.2019062021",
"version": "7.4.2019062021"
},
"offer": "RHEL-HA",
"publisher": "RedHat",
"sku": "7.5",
"urn": "RedHat:RHEL-HA:7.5:7.5.2019062021",
"version": "7.5.2019062021"
},
"offer": "RHEL-HA",
"publisher": "RedHat",
"sku": "7.6",
"urn": "RedHat:RHEL-HA:7.6:7.6.2019062019",
"version": "7.6.2019062019"
},
"offer": "RHEL-HA",
"publisher": "RedHat",
"sku": "8.0",
"urn": "RedHat:RHEL-HA:8.0:8.0.2020021914",
"version": "8.0.2020021914"
},
"offer": "RHEL-HA",
"publisher": "RedHat",
"sku": "8.1",
"urn": "RedHat:RHEL-HA:8.1:8.1.2020021914",
"version": "8.1.2020021914"
},
"offer": "RHEL-HA",
"publisher": "RedHat",
"sku": "80-gen2",
"urn": "RedHat:RHEL-HA:80-gen2:8.0.2020021915",
"version": "8.0.2020021915"
},
"offer": "RHEL-HA",
"publisher": "RedHat",
"sku": "81_gen2",
"urn": "RedHat:RHEL-HA:81_gen2:8.1.2020021915",
"version": "8.1.2020021915"
You can also choose SQL Server 2019 pre-installed on RHEL8-HA images. To get
the list of these images, run the following command:
Azure CLI
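The command itself was lost in this copy. A minimal sketch that lists the SQL Server 2019 on RHEL 8 images shown in the output below, assuming the same az vm image list pattern as before:

```shell
# List SQL Server 2019 images pre-installed on RHEL 8
az vm image list --all --offer "sql2019-rhel8" --publisher "MicrosoftSQLServer"
```
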
Output
"offer": "sql2019-rhel8",
"publisher": "MicrosoftSQLServer",
"sku": "enterprise",
"urn": "MicrosoftSQLServer:sql2019-rhel8:enterprise:15.0.200317",
"version": "15.0.200317"
},
"offer": "sql2019-rhel8",
"publisher": "MicrosoftSQLServer",
"sku": "enterprise",
"urn": "MicrosoftSQLServer:sql2019-rhel8:enterprise:15.0.200512",
"version": "15.0.200512"
},
"offer": "sql2019-rhel8",
"publisher": "MicrosoftSQLServer",
"sku": "sqldev",
"urn": "MicrosoftSQLServer:sql2019-rhel8:sqldev:15.0.200317",
"version": "15.0.200317"
},
"offer": "sql2019-rhel8",
"publisher": "MicrosoftSQLServer",
"sku": "sqldev",
"urn": "MicrosoftSQLServer:sql2019-rhel8:sqldev:15.0.200512",
"version": "15.0.200512"
},
"offer": "sql2019-rhel8",
"publisher": "MicrosoftSQLServer",
"sku": "standard",
"urn": "MicrosoftSQLServer:sql2019-rhel8:standard:15.0.200317",
"version": "15.0.200317"
},
"offer": "sql2019-rhel8",
"publisher": "MicrosoftSQLServer",
"sku": "standard",
"urn": "MicrosoftSQLServer:sql2019-rhel8:standard:15.0.200512",
"version": "15.0.200512"
If you use one of the above images to create the virtual machines, SQL Server 2019 comes pre-installed. Skip the Install SQL Server and mssql-tools section as described in this article.
Important
2. Create three VMs in the availability set. Replace the following values in the
command below:
<resourceGroupName>
<VM-basename>
<availabilitySetName>
<username>
<adminPassword>
Azure CLI
for i in `seq 1 3`; do
az vm create \
--resource-group <resourceGroupName> \
--name <VM-basename>$i \
--availability-set <availabilitySetName> \
--size "<VM-Size>" \
--image "RedHat:RHEL-HA:7.6:7.6.2019062019" \
--admin-username "<username>" \
--admin-password "<adminPassword>" \
--authentication-type all \
--generate-ssh-keys
done
The above command creates the VMs, and creates a default VNet for those VMs. For
more information on the different configurations, see the az vm create article.
You should get results similar to the following once the command completes for each
VM:
Output
"fqdns": "",
"id":
"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provider
s/Microsoft.Compute/virtualMachines/<VM1>",
"location": "eastus2",
"privateIpAddress": "<IP1>",
"publicIpAddress": "",
"resourceGroup": "<resourceGroupName>",
"zones": ""
Important
The VM created with the above command has a 32-GB OS disk by default. You could potentially run out of space with this default installation. You can add the following parameter to the above az vm create command to create an OS disk with 128 GB, as an example: --os-disk-size-gb 128 .
You can then configure Logical Volume Manager (LVM) if you need to expand
appropriate folder volumes to accommodate your installation.
Azure CLI
ssh <username>@<publicIPAddress>
If the connection is successful, you should see the following output representing the
Linux terminal:
Output
[<username>@<VM1> ~]$
Important
In order to complete this portion of the tutorial, you must have a subscription for
RHEL and the High Availability Add-on. If you are using an image recommended in
the previous section, you do not have to register another subscription.
Connect to each VM node and follow the guide below to enable HA. For more
information, see enable high availability subscription for RHEL.
Tip
It's easier if you open an SSH session to each of the VMs simultaneously, as the
same commands must be run on each VM throughout the article.
If you're copying and pasting multiple sudo commands and are prompted for a
password, the additional commands won't run. Run each command separately.
1. Run the following commands on each VM to open the Pacemaker firewall ports:
Bash
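The commands were lost in this copy. A sketch of the usual way to open the Pacemaker firewall ports with firewalld's built-in high-availability service, which is also referenced later in this article:

```shell
# Open all Pacemaker-related ports via the predefined firewalld service
sudo firewall-cmd --permanent --add-service=high-availability
sudo firewall-cmd --reload
```
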
2. Update and install Pacemaker packages on all nodes using the following
commands:
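The package commands were lost in this copy. A sketch of a typical update and install of the Pacemaker stack on RHEL, assuming the standard package names (the reboot that follows is shown below in the article):

```shell
# Update the system and install the Pacemaker cluster packages and Azure fence agent
sudo yum update -y
sudo yum install -y pacemaker pcs fence-agents-all resource-agents fence-agents-azure-arm
```
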
Note
sudo reboot
3. Set the password for the default user that is created when installing Pacemaker
packages. Use the same password on all nodes.
Bash
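The command was lost in this copy. Installing the Pacemaker packages creates the hacluster user; a sketch of setting its password (run on every node, with the same password):

```shell
# Set the password for the default hacluster user created by the Pacemaker packages
sudo passwd hacluster
```
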
4. Use the following command to open the hosts file and set up host name
resolution. For more information, see the configure availability group article on configuring the hosts file.
sudo vi /etc/hosts
In the vi editor, enter i to insert text, and on a blank line, add the Private IP of the
corresponding VM. Then add the VM name after a space next to the IP. Each line
should have a separate entry.
Output
<IP1> <VM1>
<IP2> <VM2>
<IP3> <VM3>
Important
We recommend that you use your Private IP address above. Using the Public
IP address in this configuration will cause the setup to fail and we don't
recommend exposing your VM to external networks.
To exit the vi editor, first hit the Esc key, and then enter the command :wq to write
the file and quit.
Bash
2. Remove any existing cluster configuration from all nodes. Run the following
command:
Bash
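The command was lost in this copy. A sketch of the usual way to clear any prior cluster configuration with pcs (run on all nodes):

```shell
# Remove any existing cluster configuration from this node
sudo pcs cluster destroy
```
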
3. On the primary node, run the following commands to set up the cluster.
When running the pcs cluster auth command to authenticate the cluster
nodes, you will be prompted for a password. Enter the password for the
hacluster user created earlier.
RHEL7
Bash
sudo pcs cluster setup --name az-hacluster <VM1> <VM2> <VM3> --token
30000
RHEL8
For RHEL 8, you need to authenticate the nodes separately. Manually enter the
username and password for hacluster when prompted.
Bash
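The RHEL 8 commands were lost in this copy. A sketch of the RHEL 8 equivalent of the RHEL 7 setup shown above, using pcs host auth to authenticate the nodes first (the cluster name az-hacluster and --token 30000 are carried over from the RHEL 7 command):

```shell
# Authenticate the nodes as the hacluster user, then create and start the cluster
sudo pcs host auth <VM1> <VM2> <VM3> -u hacluster
sudo pcs cluster setup az-hacluster <VM1> <VM2> <VM3> --token 30000
sudo pcs cluster start --all
```
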
4. Run the following command to check that all nodes are online.
Bash
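The command was lost in this copy. A sketch of the standard status check that produces the output shown below:

```shell
# Show cluster, node, and resource status
sudo pcs status
```
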
RHEL 7
If all nodes are online, you will see an output similar to the following:
Output
WARNINGS:
Stack: corosync
Last change: Fri Aug 23 18:27:56 2019 by hacluster via crmd on <VM2>
3 nodes configured
0 resources configured
No resources
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
RHEL 8
Output
WARNINGS:
Cluster Summary:
* Stack: corosync
* Last change: Fri Aug 23 18:27:56 2019 by hacluster via crmd on <VM2>
* 3 nodes configured
Node List:
* No resources
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
5. Set expected votes in the live cluster to 3. This command only affects the live
cluster, and does not change the configuration files.
On all nodes, set the expected votes with the following command:
Bash
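The command was lost in this copy. A sketch of setting the expected votes in the live cluster with pcs, matching the description above:

```shell
# Set expected votes to 3 in the live cluster only (doesn't change config files)
sudo pcs quorum expected-votes 3
```
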
Check the version of the Azure Fence Agent to ensure that it's updated. Use the
following command:
Bash
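The command was lost in this copy. A sketch of a package query that produces the "Installed Packages" output shown below:

```shell
# Show the installed version of the Azure fence agent
sudo yum info fence-agents-azure-arm
```
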
Installed Packages
Name : fence-agents-azure-arm
Arch : x86_64
Version : 4.2.1
Release : 11.el7_6.8
Size : 28 k
Repo : installed
URL : https://github.com/ClusterLabs/fence-agents
JSON
"Id": null,
"IsCustom": true,
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"NotActions": [
],
"AssignableScopes": [
"/subscriptions/<subscriptionId>"
Azure CLI
Output
"assignableScopes": [
"/subscriptions/<subscriptionId>"
],
"id":
"/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/roleDefin
itions/<roleNameId>",
"name": "<roleNameId>",
"permissions": [
"actions": [
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"dataActions": [],
"notActions": [],
"notDataActions": []
],
"roleType": "CustomRole",
"type": "Microsoft.Authorization/roleDefinitions"
1. Go to https://portal.azure.com
2. Open the All resources blade
3. Select the virtual machine of the first cluster node
4. Click Access control (IAM)
5. Click Add a role assignment
6. Select the role Linux Fence Agent Role-<username> from the Role list
7. In the Select list, enter the name of the application you created above,
<resourceGroupName>-app
8. Click Save
9. Repeat the steps above for each cluster node.
Replace the <ApplicationID> with the ID value from your application registration.
Replace the <servicePrincipalPassword> with the value from the client secret.
Replace the <resourceGroupName> with the resource group from your subscription
used for this tutorial.
Replace the <tenantID> and the <subscriptionId> from your Azure Subscription.
Bash
Since we already added a rule to our firewall to allow the HA service ( --add-
service=high-availability ), there's no need to open the following firewall ports on all
nodes: 2224, 3121, 21064, 5405. However, if you are experiencing any type of
connection issues with HA, use the following command to open these ports that are
associated with HA.
Tip
You can optionally add all ports in this tutorial at once to save some time. The ports
that need to be opened are explained in their relative sections below. If you would
like to add all ports now, add the additional ports: 1433 and 5022.
Bash
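The commands were lost in this copy. A sketch of opening the individual HA ports named above, plus the additional ports 1433 and 5022 mentioned in the Tip (port and protocol assignments are my assumption based on the standard Pacemaker and SQL Server defaults):

```shell
# Open individual Pacemaker ports (only needed if the high-availability service rule is insufficient)
sudo firewall-cmd --permanent --add-port=2224/tcp --add-port=3121/tcp --add-port=21064/tcp --add-port=5405/udp
# Optionally open SQL Server (1433) and the AG endpoint (5022) now to save time later
sudo firewall-cmd --permanent --add-port=1433/tcp --add-port=5022/tcp
sudo firewall-cmd --reload
```
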
Note
If you created the VMs with SQL Server 2019 pre-installed on the RHEL8-HA image, you can skip the steps below to install SQL Server and mssql-tools. Start the Configure an availability group section after you set up the sa password on all VMs by running the command sudo /opt/mssql/bin/mssql-conf set-sa-password on all VMs.
Use the following section to install SQL Server and mssql-tools on the VMs. Choose
one of the following samples to install SQL Server 2017 on RHEL 7 or SQL Server 2019 on
RHEL 8. Perform each of these actions on all nodes. For more information, see Install
SQL Server on a Red Hat VM.
Bash
Bash
Bash
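The installation commands were lost in this copy. As one example, a sketch of the RHEL 8 / SQL Server 2019 path, assuming the standard Microsoft package repository layout:

```shell
# Register the SQL Server 2019 repository for RHEL 8, then install SQL Server
sudo curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/8/mssql-server-2019.repo
sudo yum install -y mssql-server
```
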
RHEL 7
Bash
RHEL 8
Bash
Note
source ~/.bashrc
Bash
Output
Active: active (running) since Thu 2019-12-05 17:30:55 UTC; 20min ago
Docs: https://learn.microsoft.com/sql/linux
CGroup: /system.slice/mssql-server.service
├─11612 /opt/mssql/bin/sqlservr
└─11640 /opt/mssql/bin/sqlservr
Create a certificate
We currently don't support AD authentication to the AG endpoint. Therefore, we must
use a certificate for AG endpoint encryption.
1. Connect to all nodes using SQL Server Management Studio (SSMS) or SQL CMD.
Run the following commands to enable an AlwaysOn_health session and create a
master key:
Important
If you are connecting remotely to your SQL Server instance, you will need to
have port 1433 open on your firewall. You'll also need to allow inbound
connections to port 1433 in your NSG for each VM. For more information, see
Create a security rule for creating an inbound security rule.
SQL
GO
2. Connect to the primary replica using SSMS or SQL CMD. The below commands will
create a certificate at /var/opt/mssql/data/dbm_certificate.cer and a private key
at /var/opt/mssql/data/dbm_certificate.pvk on your primary SQL Server replica:
SQL
TO FILE = '/var/opt/mssql/data/dbm_certificate.cer'
FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
);
GO
Exit the SQL CMD session by running the exit command, and return back to your SSH
session.
On the primary server, run the following scp command to copy the certificate to
the target servers:
Replace <username> and <VM2> with the user name and target VM name that
you are using.
Run this command for all secondary replicas.
Note
You don't have to run sudo -i , which gives you the root environment. You
could just run the sudo command in front of each command as we previously
did in this tutorial.
Bash
# The below command allows you to run commands in the root environment
sudo -i
Bash
scp /var/opt/mssql/data/dbm_certificate.*
<username>@<VM2>:/home/<username>
Bash
sudo -i
mv /home/<username>/dbm_certificate.* /var/opt/mssql/data/
cd /var/opt/mssql/data
3. The following Transact-SQL script creates a certificate from the backup that you
created on the primary SQL Server replica. Update the script with strong
passwords. The decryption password is the same password that you used to create
the .pvk file in the previous step. To create the certificate, run the following script
using SQL CMD or SSMS on all secondary servers:
SQL
FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
);
GO
SQL
FOR DATABASE_MIRRORING (
ROLE = ALL,
);
GO
GO
SQL
FOR REPLICA ON
N'<VM1>'
WITH (
ENDPOINT_URL = N'tcp://<VM1>:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC
),
N'<VM2>'
WITH (
ENDPOINT_URL = N'tcp://<VM2>:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC
),
N'<VM3>'
WITH(
ENDPOINT_URL = N'tcp://<VM3>:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC
);
GO
GO
SQL
USE [master]
GO
GO
GO
On all SQL Server instances, save the credentials used for the SQL Server login.
Bash
sudo vi /var/opt/mssql/secrets/passwd
Bash
pacemakerLogin
<password>
To exit the vi editor, first hit the Esc key, and then enter the command :wq to write
the file and quit.
Bash
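The commands were lost in this copy. A sketch of locking down the credentials file so only root can read it, which is the usual follow-up to creating /var/opt/mssql/secrets/passwd:

```shell
# Restrict the Pacemaker login credentials file to root read-only
sudo chown root:root /var/opt/mssql/secrets/passwd
sudo chmod 400 /var/opt/mssql/secrets/passwd
```
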
Bash
2. On your secondary replicas, run the following commands to join them to the AG:
SQL
GO
GO
3. Run the following Transact-SQL script on the primary replica and each secondary
replica:
SQL
GO
GO
4. Once the secondary replicas are joined, you can see them in SSMS Object Explorer
by expanding the Always On High Availability node:
Add a database to the availability group
We will follow the configure availability group article on adding a database.
The following Transact-SQL commands are used in this step. Run these commands on
the primary replica:
SQL
GO
ALTER DATABASE [db1] SET RECOVERY FULL; -- set the database in full recovery
mode
GO
TO DISK = N'/var/opt/mssql/data/db1.bak';
GO
ALTER AVAILABILITY GROUP [ag1] ADD DATABASE [db1]; -- adds the database db1
to the AG
GO
GO
If the synchronization_state_desc lists SYNCHRONIZED for db1 , the replicas are
synchronized, and the secondaries show db1 in the same state as the primary replica.
Note
This article contains references to the term slave, a term that Microsoft no longer
uses. When the term is removed from the software, we'll remove it from this article.
RHEL 7
Bash
RHEL 8
Bash
2. Check your resources and ensure that they're online before proceeding, using the
following command:
Bash
RHEL 7
Output
Masters: [ <VM1> ]
RHEL 8
Output
Bash
# The above command scans for all IP addresses that are already occupied in the 10.0.0.x space.
Bash
Bash
Add constraints
1. To ensure that the IP address and the AG resource are running on the same node,
a colocation constraint must be configured. Run the following command:
RHEL 7
Bash
RHEL 8
Bash
RHEL 7
Bash
RHEL 8
Bash
Bash
RHEL 7
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
RHEL 8
Output
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
Re-enable stonith
We're ready for testing. Re-enable stonith in the cluster by running the following
command on Node 1:
Bash
Output
Stack: corosync
Last change: Sat Dec 7 00:18:02 2019 by root via cibadmin on VM1
3 nodes configured
5 resources configured
Masters: [ <VM2> ]
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Test failover
To ensure that the configuration has succeeded so far, we will test a failover. For more
information, see Always On availability group failover on Linux.
1. Run the following command to manually fail over the primary replica to <VM2> .
Replace <VM2> with the value of your server name.
RHEL 7
Bash
RHEL 8
Bash
sudo pcs resource move ag_cluster-clone <VM2> --master
You can also specify an additional option so that the temporary constraint that's
created to move the resource to a desired node is disabled automatically, and you
do not have to perform steps 2 and 3 below.
RHEL 7
Bash
RHEL 8
Bash
Alternatively, you can automate steps 2 and 3 below, clearing the temporary
constraint as part of the resource move command itself, by combining multiple
commands in a single line.
RHEL 7
Bash
sudo pcs resource move ag_cluster-master <VM2> --master && sleep 30 &&
pcs resource clear ag_cluster-master
RHEL 8
Bash
sudo pcs resource move ag_cluster-clone <VM2> --master && sleep 30 &&
pcs resource clear ag_cluster-clone
2. If you check your constraints again, you'll see that another constraint was added
because of the manual failover:
RHEL 7
Output
[<username>@VM1 ~]$ sudo pcs constraint list --full
Location Constraints:
Resource: ag_cluster-master
Enabled on: VM2 (score:INFINITY) (role: Master) (id:cli-prefer-
ag_cluster-master)
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
RHEL 8
Output
Location Constraints:
Resource: ag_cluster-master
Enabled on: VM2 (score:INFINITY) (role: Master) (id:cli-prefer-
ag_cluster-clone)
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
RHEL 7
Bash
RHEL 8
Bash
4. Check your cluster resources using the command sudo pcs resource , and you
should see that the primary instance is now <VM2> .
Output
Masters: [ <VM2> ]
Slaves: [ <VM3> ]
Masters: [ <VM2> ]
Test fencing
You can test STONITH by running the following command. Try running the below
command from <VM1> for <VM3> .
Bash
Note
By default, the fence action turns the node off and then back on. If you only want to
take the node offline, use the --off option in the command.
Output
Return Value: 0
For more information on testing a fence device, see the following Red Hat article.
Next steps
To use an availability group listener for your SQL Server instances, you must
create and configure a load balancer.
Tutorial: Configure an availability group listener for SQL Server on RHEL virtual
machines in Azure
Tutorial: Configure availability groups
for SQL Server on SLES virtual machines
in Azure
Article • 03/10/2023
Applies to:
SQL Server on Azure VM
Note
We use SQL Server 2022 (16.x) with SUSE Linux Enterprise Server (SLES) v15 in this
tutorial, but it's possible to use SQL Server 2019 (15.x) with SLES v12 or SLES v15
to configure high availability.
" Create a new resource group, availability set, and Linux virtual machines (VMs)
" Enable high availability (HA)
" Create a Pacemaker cluster
" Configure a fencing agent by creating a STONITH device
" Install SQL Server and mssql-tools on SLES
" Configure SQL Server Always On availability group
" Configure availability group (AG) resources in the Pacemaker cluster
" Test a failover and the fencing agent
If you don't have an Azure subscription, create a free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see
Quickstart for Bash in Azure Cloud Shell.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're
running on Windows or macOS, consider running Azure CLI in a Docker container.
For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login
command. To finish the authentication process, follow the steps displayed in
your terminal. For other sign-in options, see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more
information about extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To
upgrade to the latest version, run az upgrade.
This article requires version 2.0.30 or later of the Azure CLI. If using Azure Cloud
Shell, the latest version is already installed.
Azure CLI
az vm availability-set create \
--resource-group <resourceGroupName> \
--name <availabilitySetName> \
--platform-fault-domain-count 2 \
--platform-update-domain-count 2
You should get the following results once the command completes:
Output
{
"id":
"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provider
s/Microsoft.Compute/availabilitySets/<availabilitySetName>",
"location": "eastus2",
"name": "<availabilitySetName>",
"platformFaultDomainCount": 2,
"platformUpdateDomainCount": 2,
"proximityPlacementGroup": null,
"resourceGroup": "<resourceGroupName>",
"sku": {
"capacity": null,
"name": "Aligned",
"tier": null
},
"statuses": null,
"tags": {},
"type": "Microsoft.Compute/availabilitySets",
"virtualMachines": []
<resourceGroupName>
<vNetName>
<subnetName>
Azure CLI
az network vnet create \
--resource-group <resourceGroupName> \
--name <vNetName> \
--address-prefix 10.1.0.0/16 \
--subnet-name <subnetName> \
--subnet-prefix 10.1.1.0/24
The previous command creates a VNet and a subnet containing a custom IP range.
Azure CLI
# if you want to search the basic offers you could search using the
command below
You should see the following results when you search for the BYOS images:
Output
"offer": "sles-15-sp3-byos",
"publisher": "SUSE",
"sku": "gen1",
"urn": "SUSE:sles-15-sp3-byos:gen1:2022.05.05",
"version": "2022.05.05"
},
"offer": "sles-15-sp3-byos",
"publisher": "SUSE",
"sku": "gen1",
"urn": "SUSE:sles-15-sp3-byos:gen1:2022.07.19",
"version": "2022.07.19"
},
"offer": "sles-15-sp3-byos",
"publisher": "SUSE",
"sku": "gen1",
"urn": "SUSE:sles-15-sp3-byos:gen1:2022.11.10",
"version": "2022.11.10"
},
"offer": "sles-15-sp3-byos",
"publisher": "SUSE",
"sku": "gen2",
"urn": "SUSE:sles-15-sp3-byos:gen2:2022.05.05",
"version": "2022.05.05"
},
"offer": "sles-15-sp3-byos",
"publisher": "SUSE",
"sku": "gen2",
"urn": "SUSE:sles-15-sp3-byos:gen2:2022.07.19",
"version": "2022.07.19"
},
"offer": "sles-15-sp3-byos",
"publisher": "SUSE",
"sku": "gen2",
"urn": "SUSE:sles-15-sp3-byos:gen2:2022.11.10",
"version": "2022.11.10"
Important
2. Create three VMs in the availability set. Replace these values in the following
command:
<resourceGroupName>
<VM-basename>
<availabilitySetName>
<adminPassword>
<vNetName>
<subnetName>
Azure CLI
for i in `seq 1 3`; do
az vm create \
--resource-group <resourceGroupName> \
--name <VM-basename>$i \
--availability-set <availabilitySetName> \
--size "<VM-Size>" \
--os-disk-size-gb 128 \
--image "SUSE:sles-15-sp3-byos:gen1:2022.11.10" \
--admin-username "<username>" \
--admin-password "<adminPassword>" \
--authentication-type all \
--generate-ssh-keys \
--vnet-name "<vNetName>" \
--subnet "<subnetName>" \
--public-ip-sku Standard \
--public-ip-address ""
done
The previous command creates the VMs using the previously defined VNet. For more
information on the different configurations, see the az vm create article.
You should get results similar to the following once the command completes for each
VM:
Output
"fqdns": "",
"id":
"/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/provider
s/Microsoft.Compute/virtualMachines/sles1",
"location": "westus",
"privateIpAddress": "<IP1>",
"resourceGroup": "<resourceGroupName>",
"zones": ""
Azure CLI
ssh <username>@<publicIPAddress>
If the connection is successful, you should see the following output representing the
Linux terminal:
Output
[<username>@sles1 ~]$
It is easier to open an SSH session on each of the VMs (nodes) simultaneously, as the
same commands must be run on each VM throughout the article.
If you're copying and pasting multiple sudo commands and are prompted for a
password, the additional commands won't run. Run each command separately.
<subscriptionEmailAddress>
<registrationCode>
Bash
sudo SUSEConnect \
--url=https://scc.suse.com \
-e <subscriptionEmailAddress> \
-r <registrationCode>
Bash
Bash
During this step, you may be prompted to overwrite an existing SSH file. You must agree
to this prompt. You don't need to enter a passphrase.
In the following command, the <username> account can be the same account you
configured for each node when creating the VM. You can also use the root account, but
this isn't recommended in a production environment.
Bash
In this example, we're connecting to the second and third nodes from the first VM
( sles1 ). Once again, the <username> account can be the same account you configured
for each node when creating the VM.
Bash
ssh <username>@sles2
ssh <username>@sles3
Repeat this process from all three nodes, so that each node can communicate with the
others without requiring passwords.
For more information about DNS and Active Directory, see Join SQL Server on a Linux
host to an Active Directory domain.
Important
We recommend that you use your private IP address in the previous example.
Using the public IP address in this configuration will cause the setup to fail, and
would expose your VM to external networks.
The VMs and their IP address used in this example are listed as follows:
sles1 : 10.0.0.85
sles2 : 10.0.0.86
sles3 : 10.0.0.87
Cluster installation
1. Run the following command to install the ha-cluster-bootstrap package on node
1, and then restart the node. In this example, it is the sles1 VM.
Bash
sudo zypper install ha-cluster-bootstrap
After the node is restarted, run the following command to deploy the cluster:
Bash
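The deployment command was lost in this copy. On SLES, the ha-cluster-bootstrap package provides a bootstrap script; a sketch of the usual invocation that produces the interactive output shown below (the exact command in the original article may differ):

```shell
# Bootstrap the cluster on the first node (interactive: configures csync2, SBD, virtual IP)
sudo crm cluster init
```
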
Output
The user 'hacluster' will have the login shell configuration changed
to /bin/bash
Continue (y/n)? y
Configuring csync2
This will configure the cluster messaging layer. You will need
is eth0's network, but you can use the network address of any
active interface).
Configure SBD:
are a good choice. Note that all data on the partition you
https://10.0.0.85:7630/
Virtual IP []10.0.0.89
Configure Qdevice/Qnetd:
2. Check the status of the cluster on node 1 using the following command:
Bash
Output
1 node configured
3. On all nodes, change the password for hacluster to something more secure using
the following command. You must also change your root user password:
Bash
Bash
4. Run the following command on node 2 and node 3 to first install the crmsh
package:
Bash
Bash
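The commands were lost in this copy. A sketch of installing crmsh and then joining the cluster from node 2 and node 3; the join step prompts for the IP address of an existing node, as the output below describes:

```shell
# Install the cluster shell, then join this node to the existing cluster
sudo zypper install -y crmsh
sudo crm cluster join
```
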
Output
You will be asked for the IP address of an existing node, from which
passwordless ssh between nodes, you will be prompted for the root
root@10.0.0.85's password:
Configuring csync2...done
Merging known_hosts
WARNING: scp to sles2 failed (Exited with error code 1, Error output:
The authenticity of host 'sles2 (10.1.1.5)' can't be established.
lost connection
https://10.0.0.86:7630/
5. Once you've joined all machines to the cluster, check your resource to see if all
VMs are online:
Bash
Output
Stack: corosync
Last change: Mon Mar 6 17:10:09 2023 by root via cibadmin on sles1
3 nodes configured
6. Install the cluster resource component. Run the following command on all nodes.
Bash
7. Install the azure-lb component. Run the following command on all nodes.
Bash
8. Configure the operating system. Go through the following steps on all nodes.
Bash
sudo vi /etc/systemd/system.conf
ini
#DefaultTasksMax=512
DefaultTasksMax=4096
Bash
Bash
9. Reduce the size of the dirty cache. Go through the following steps on all nodes.
Bash
sudo vi /etc/sysctl.conf
ini
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800
10. Install the Azure Python SDK on all nodes with the following commands:
Bash
# You might need to activate the public cloud extension first. In this example, the SUSEConnect command is for SLES 15 SP1.
SUSEConnect -p sle-module-public-cloud/15.1/x86_64
Check the version of the Azure fence agent to ensure that it's updated. Use the
following command:
Bash
Output
----------------------------------------
Repository : SLE-Product-HA15-SP3-Updates
Name : resource-agents
Version : 4.8.0+git30.d0077df0-150300.8.37.1
Arch : x86_64
Status : up-to-date
Replace <username> with a name of your choice. This is to avoid any duplication
when creating this role definition.
Replace <subscriptionId> with your Azure Subscription ID.
JSON
"Id": null,
"IsCustom": true,
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"NotActions": [
],
"AssignableScopes": [
"/subscriptions/<subscriptionId>"
Bash
az role definition create --role-definition "<filename>.json"
Output
"assignableScopes": [
"/subscriptions/<subscriptionId>"
],
"id":
"/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/roleDefin
itions/<roleNameId>",
"name": "<roleNameId>",
"permissions": [
"actions": [
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/powerOff/action",
"Microsoft.Compute/virtualMachines/start/action"
],
"dataActions": [],
"notActions": [],
"notDataActions": []
],
"roleType": "CustomRole",
"type": "Microsoft.Authorization/roleDefinitions"
Warning
1. Go to https://portal.azure.com
2. Open the All resources pane
3. Select the virtual machine of the first cluster node
4. Select Access control (IAM)
5. Select Add role assignments
6. Select the role Linux Fence Agent Role-<username> from the Role list
7. Leave Assign access to as the default Users, group, or service principal .
8. In the Select list, enter the name of the application you created previously, for
example <resourceGroupName>-app .
9. Select Save.
Bash
3. In the crm prompt, run the following command to configure the resource
properties, which creates the resource called rsc_st_azure as shown in the
following example:
Bash
commit
quit
Bash
sudo crm configure property stonith-timeout=900
5. Check the status of your cluster to see that STONITH has been enabled:
Bash
Output
Stack: corosync
Last change: Mon Mar 6 18:10:09 2023 by root via cibadmin on sles1
3 nodes configured
1. Download the Microsoft SQL Server 2022 SLES repository configuration file:
Bash
sudo zypper addrepo -fc
https://packages.microsoft.com/config/sles/15/mssql-server-2022.repo
Bash
To ensure that the Microsoft package signing key is installed on your system, use
the following command to import the key:
Bash
Bash
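The installation commands were lost in this copy. A sketch of the usual key import and install sequence on SLES, assuming the standard Microsoft signing key location and package name:

```shell
# Import the Microsoft package signing key, refresh repositories, and install SQL Server
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo zypper --gpg-auto-import-keys refresh
sudo zypper install -y mssql-server
```
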
4. After the package installation finishes, run mssql-conf setup and follow the
prompts to set the SA password and choose your edition.
Bash
Note
Make sure to specify a strong password for the SA account (Minimum length
8 characters, including uppercase and lowercase letters, base 10 digits and/or
non-alphanumeric symbols).
Bash
Bash
Bash
3. Install mssql-tools with the unixODBC developer package. For more information,
see Install the Microsoft ODBC driver for SQL Server (Linux).
Bash
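The command was lost in this copy. A sketch of installing mssql-tools together with the unixODBC developer package, as the step above describes:

```shell
# Install the SQL Server command-line tools and the unixODBC developer package
sudo zypper install -y mssql-tools unixODBC-devel
```
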
Bash
source ~/.bashrc
Bash
Bash
Bash
Bash
Create a certificate
Microsoft doesn't support Active Directory authentication to the AG endpoint.
Therefore, you must use a certificate for AG endpoint encryption.
1. Connect to all nodes using SQL Server Management Studio (SSMS) or sqlcmd. Run
the following commands to enable an AlwaysOn_health session and create a
master key:
Important
If you are connecting remotely to your SQL Server instance, you will need to
have port 1433 open on your firewall. You'll also need to allow inbound
connections to port 1433 in your NSG for each VM. For more information, see
Create a security rule for creating an inbound security rule.
SQL
GO
GO
2. Connect to the primary replica using SSMS or sqlcmd. The below commands
create a certificate at /var/opt/mssql/data/dbm_certificate.cer and a private key
at /var/opt/mssql/data/dbm_certificate.pvk on your primary SQL Server replica:
SQL
GO
FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
);
GO
Exit the sqlcmd session by running the exit command, and return back to your SSH
session.
Replace <username> and sles2 with the user name and target VM name that
you're using.
Run this command for all secondary replicas.
Note
You don't have to run sudo -i , which gives you the root environment. You
can run the sudo command in front of each command instead.
Bash
# The below command allows you to run commands in the root environment
sudo -i
Bash
scp /var/opt/mssql/data/dbm_certificate.* <username>@sles2:/home/<username>
Bash
sudo -i
mv /home/<username>/dbm_certificate.* /var/opt/mssql/data/
cd /var/opt/mssql/data
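The original article next gives the mssql service account ownership of the copied certificate files; that command is missing here. A sketch:

```shell
# Let the mssql account read the copied certificate and private key.
cd /var/opt/mssql/data
chown mssql:mssql dbm_certificate.*
```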
3. The following Transact-SQL script creates a certificate from the backup that you
created on the primary SQL Server replica. Update the script with strong
passwords. The decryption password is the same password that you used to create
the .pvk file in the previous step. To create the certificate, run the following script
using sqlcmd or SSMS on all secondary servers:
SQL
CREATE CERTIFICATE dbm_certificate
FROM FILE = '/var/opt/mssql/data/dbm_certificate.cer'
WITH PRIVATE KEY (
FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
DECRYPTION BY PASSWORD = '<PrivateKeyPassword>'
);
GO
SQL
CREATE ENDPOINT [Hadr_endpoint]
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
ROLE = ALL,
AUTHENTICATION = CERTIFICATE dbm_certificate,
ENCRYPTION = REQUIRED ALGORITHM AES
);
GO
ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;
GO
SQL
CREATE AVAILABILITY GROUP [ag1]
WITH (
DB_FAILOVER = ON,
CLUSTER_TYPE = EXTERNAL
)
FOR REPLICA ON
N'sles1'
WITH (
ENDPOINT_URL = N'tcp://sles1:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC
),
N'sles2'
WITH (
ENDPOINT_URL = N'tcp://sles2:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC
),
N'sles3'
WITH (
ENDPOINT_URL = N'tcp://sles3:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC
);
GO
ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE;
GO
SQL
USE [master]
GO
CREATE LOGIN [pacemakerLogin] WITH PASSWORD = N'<password>';
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [pacemakerLogin];
GO
On all SQL Server instances, save the credentials used for the SQL Server login.
sudo vi /var/opt/mssql/secrets/passwd
Bash
pacemakerLogin
<password>
To exit the vi editor, first hit the Esc key, and then enter the command :wq to write
the file and quit.
Bash
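The command that locks down the credentials file is missing from this copy; restricting it to read-only for root is the usual step:

```shell
# Only root should be able to read the Pacemaker login credentials.
sudo chmod 400 /var/opt/mssql/secrets/passwd
```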
1. Run the following Transact-SQL script on each secondary replica to join it to
the availability group:
SQL
ALTER AVAILABILITY GROUP [ag1] JOIN WITH (CLUSTER_TYPE = EXTERNAL);
GO
ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE;
GO
2. Run the following Transact-SQL script on the primary replica and each secondary
replica:
SQL
GRANT ALTER, CONTROL, VIEW DEFINITION ON AVAILABILITY GROUP::[ag1] TO [pacemakerLogin];
GO
GRANT VIEW SERVER STATE TO [pacemakerLogin];
GO
3. Once the secondary replicas are joined, you can see them in SSMS Object Explorer
by expanding the Always On High Availability node:
The following Transact-SQL commands are used in this step. Run these commands on
the primary replica:
SQL
CREATE DATABASE [db1]; -- creates a database named db1
GO
ALTER DATABASE [db1] SET RECOVERY FULL; -- set the database in full recovery mode
GO
BACKUP DATABASE [db1] TO DISK = N'/var/opt/mssql/data/db1.bak';
GO
ALTER AVAILABILITY GROUP [ag1] ADD DATABASE [db1]; -- adds the database db1 to the AG
GO
SQL
SELECT * FROM sys.databases WHERE name = 'db1';
GO
SELECT DB_NAME(database_id) AS 'database', synchronization_state_desc
FROM sys.dm_hadr_database_replica_states;
GO
If synchronization_state_desc lists SYNCHRONIZED for db1, the replicas are
synchronized, and the secondaries show db1 along with the primary replica.
Note
Bias-free communication
This article contains references to the term slave, a term Microsoft considers
offensive when used in this context. The term appears in this article because it
currently appears in the software. When the term is removed from the software, we
will remove it from the article.
This article references the guide to create the availability group resources in a
Pacemaker cluster.
Enable Pacemaker
Enable Pacemaker so that it automatically starts.
Bash
sudo systemctl enable pacemaker
Bash
2. In the crm prompt, run the following command to configure the resource
properties. The following commands create the resource ag_cluster in the
availability group ag1 .
Bash
commit
quit
Tip
3. Set the co-location constraint for the virtual IP, to run on the same node as the
primary node:
Bash
commit
quit
4. Add the ordering constraint, to prevent the IP address from temporarily pointing
to the node with the pre-failover secondary. Run the following command to create
ordering constraint:
Bash
commit
quit
Check the cluster status:
Bash
sudo crm status
Output
Cluster Summary:
Stack: corosync
Last change: Mon Mar 6 18:38:09 2023 by root via cibadmin on sles1
3 nodes configured
Node List:
Masters: [ sles1 ]
To review the cluster configuration, run:
Bash
sudo crm config show
Output
node 1: sles1
node 2: sles2
node 3: sles3
params ip=10.0.0.93 \
params ag_name=ag1 \
meta failure-timeout=60s \
ms ms-ag_cluster ag_cluster \
property cib-bootstrap-options: \
have-watchdog=false \
dc-version="2.0.5+20201202.ba59be712-150300.4.30.3-2.0.5+20201202.ba59be712" \
cluster-infrastructure=corosync \
cluster-name=sqlcluster \
stonith-enabled=true \
concurrent-fencing=true \
stonith-timeout=900
rsc_defaults rsc-options: \
resource-stickiness=1 \
migration-threshold=3
op_defaults op-options: \
timeout=600 \
record-pending=true
Test failover
To ensure that the configuration has succeeded so far, test a failover. For more
information, see Always On availability group failover on Linux.
1. Run the following command to manually fail over the primary replica to sles2 .
Replace sles2 with the value of your server name.
Bash
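The failover command did not survive extraction. With the crmsh tooling used elsewhere in this section, it would be along these lines:

```shell
# Move the master role of the AG resource to sles2.
sudo crm resource move ms-ag_cluster sles2
```

After the failover completes, the article removes the resulting cli-prefer-ms-ag_cluster constraint in a later step.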
Output
2. Check the cluster status:
Bash
sudo crm status
Output
Cluster Summary:
Stack: corosync
3 nodes configured
Node List:
3. After some time, the sles2 VM is now the primary, and the other two VMs are
secondaries. Run sudo crm status once again, and review the output, which is
similar to the following example:
Output
Cluster Summary:
Stack: corosync
Last change: Mon Mar 6 18:42:59 2023 by root via cibadmin on sles1
3 nodes configured
Node List:
Masters: [ sles2 ]
4. Check your constraints again, using crm config show . Observe that another
constraint was added because of the manual failover.
Bash
crm configure
delete cli-prefer-ms-ag_cluster
commit
Test fencing
You can test STONITH (fencing) by running the following command. For example,
run the command from sles1 to fence sles3.
Bash
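The fencing command is missing from this copy; with crmsh it is along these lines:

```shell
# Ask the cluster to fence (STONITH) node sles3; run this from sles1.
sudo crm node fence sles3
```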
See also
Tutorial: Configure an availability group listener for SQL Server on RHEL virtual
machines in Azure
Tutorial: Configure an availability group
listener for SQL Server on RHEL virtual
machines in Azure
Article • 11/04/2022
Applies to:
SQL Server on Azure VM
Note
We use SQL Server 2017 with RHEL 7.6 in this tutorial, but it's possible to use SQL
Server 2019 on RHEL 7 or RHEL 8 to configure high availability. The commands to
configure availability group resources have changed in RHEL 8; see the article
Create availability group resource and RHEL 8 resources for more information on
the correct commands.
This tutorial covers the steps to create an availability group listener for your
SQL Server instances on RHEL virtual machines (VMs) in Azure. You'll learn how to:
Prerequisite
Completed Tutorial: Configure availability groups for SQL Server on RHEL virtual
machines in Azure
3. Search for load balancer and then, in the search results, select Load Balancer,
which is published by Microsoft.
5. In the Create load balancer dialog box, configure the load balancer as follows:
Setting Value
Name A text name representing the load balancer. For example, sqlLB.
Type Internal
Virtual network The default virtual network that was created should be named VM1VNET.
Subnet Select the subnet that the SQL Server instances are in. The default should be VM1Subnet.
IP address assignment Static
Private IP address Use the virtualip IP address that was created in the cluster.
Subscription Use the subscription that was used for your resource group.
Resource group Select the resource group that the SQL Server instances are in.
Location Select the Azure location that the SQL Server instances are in.
1. In your resource group, click the load balancer that you created.
4. On Add backend pool, under Name, type a name for the back-end pool.
6. Select each virtual machine in the environment, and associate the appropriate IP
address to each selection.
7. Click Add.
Create a probe
The probe defines how Azure verifies which of the SQL Server instances currently owns
the availability group listener. Azure probes the service based on the IP address on a
port that you define when you create the probe.
3. Configure the probe on the Add probe blade. Use the following values to
configure the probe:
Setting Value
Protocol TCP
Port You can use any available port. For example, 59999.
Interval 5
Unhealthy threshold 2
4. Click OK.
5. Log in to all your virtual machines, and open the probe port using the following
commands:
Bash
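The firewall commands are missing from this copy. On RHEL, the probe port (59999 in this article's example) is typically opened with firewalld:

```shell
# Open the Azure load balancer probe port and reload the firewall rules.
sudo firewall-cmd --zone=public --add-port=59999/tcp --permanent
sudo firewall-cmd --reload
```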
Azure creates the probe and then uses it to test which SQL Server instance has the
listener for the availability group.
3. On the Add load balancing rules blade, configure the load-balancing rule. Use the
following settings:
Setting Value
Protocol TCP
Port 1433
Backend port 1433. This value is ignored because this rule uses Floating IP (direct server return).
Probe Use the name of the probe that you created for this load balancer.
Idle timeout (minutes) 4
5. Azure configures the load-balancing rule. Now the load balancer is configured to
route traffic to the SQL Server instance that hosts the listener for the availability
group.
At this point, the resource group has a load balancer that connects to all SQL Server
machines. The load balancer also contains an IP address for the SQL Server Always On
availability group listener, so that any machine can respond to requests for the
availability groups.
Create the load balancer resource in the cluster
1. Log in to the primary virtual machine. We need to create the resource to enable
the Azure load balancer probe port (59999 is used in our example). Run the
following command:
Bash
Bash
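The resource-creation command is missing here. With the pcs tooling used in this tutorial, it is roughly the following sketch (the resource name and port are this article's examples):

```shell
# Create a cluster resource that answers the Azure load balancer
# health probe on port 59999.
sudo pcs resource create azure_load_balancer azure-lb port=59999
```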
Add constraints
1. A colocation constraint must be configured to ensure the Azure load balancer IP
address and the AG resource are running on the same node. Run the following
command:
Bash
Bash
Bash
Output
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
SQL
ALTER AVAILABILITY GROUP [ag1]
ADD LISTENER N'ag1-listener' (
WITH IP ((N'<listener-IP>', N'<subnet-mask>'))
,PORT = 1433
);
GO
2. Log in to each VM node. Use the following command to open the hosts file and
set up host name resolution for the ag1-listener on each machine.
sudo vi /etc/hosts
In the vi editor, enter i to insert text, and on a blank line, add the IP of the ag1-
listener . Then add ag1-listener after a space next to the IP.
Output
<IP of ag1-listener> ag1-listener
To exit the vi editor, first hit the Esc key, and then enter the command :wq to write
the file and quit. Do this on each node.
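The same edit can be sketched non-interactively. The example below works on a scratch copy so it is safe to run anywhere; on the real VMs you would edit /etc/hosts itself (with sudo), and 10.0.0.99 is a placeholder for your actual ag1-listener IP:

```shell
# Work on a scratch copy of the hosts file for illustration.
cp /etc/hosts ./hosts.example 2>/dev/null || touch ./hosts.example

# Append the listener entry: "<IP of ag1-listener> ag1-listener".
echo "10.0.0.99 ag1-listener" >> ./hosts.example

# Confirm the entry was added.
grep "ag1-listener" ./hosts.example
```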
Use a login that was previously created and replace <YourPassword> with the
correct password. The example below uses the sa login that was created with
the SQL Server.
Bash
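The sqlcmd invocation is missing from this copy; connecting through the listener name looks like this sketch, where sa and <YourPassword> are the login and password from the earlier setup steps:

```shell
# Connect to the current primary replica via the availability group listener.
sqlcmd -S ag1-listener -U sa -P '<YourPassword>'
```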
2. Check the name of the server that you are connected to. Run the following
command in SQLCMD:
SQL
SELECT @@SERVERNAME
Your output should show the current primary node. This should be VM1 if you have
never tested a failover.
Test a failover
1. Run the following command to manually fail over the primary replica to <VM2> or
another replica. Replace <VM2> with the value of your server name.
Bash
sudo pcs resource move ag_cluster-master <VM2> --master
2. If you check your constraints, you'll see that another constraint was added because
of the manual failover:
Bash
Bash
4. Check your cluster resources using the command sudo pcs resource , and you
should see that the primary instance is now <VM2> .
Note
This article contains references to the term slave, a term that Microsoft no
longer uses. When the term is removed from the software, we'll remove it
from this article.
Output
Masters: [ <VM2> ]
5. Use SQLCMD to log in to your primary replica using the listener name:
Use a login that was previously created and replace <YourPassword> with the
correct password. The example below uses the sa login that was created with
the SQL Server.
Bash
6. Check the server that you are connected to. Run the following command in
SQLCMD:
SQL
SELECT @@SERVERNAME
You should see that you are now connected to the VM that you failed-over to.
Next steps
For more information on load balancers in Azure, see:
Configure a load balancer for an availability group on SQL Server on Azure VMs
Tutorial: Set up a three node Always On
availability group with DH2i
DxEnterprise
Article • 02/13/2023
Applies to:
SQL Server on Azure VM
This tutorial explains how to configure a SQL Server Always On availability group
with DH2i DxEnterprise running on Linux-based Azure Virtual Machines (VMs).
Note
Microsoft supports data movement, availability groups, and the SQL Server
components. Contact DH2i for support related to DH2i DxEnterprise
documentation, cluster management, and quorum management.
Install SQL Server on all virtual machines that will be part of the availability group.
Install DxEnterprise on all the virtual machines and configure the DxEnterprise
cluster.
Create the virtual hosts to provide failover support and high availability, and add an
availability group and database to the availability group.
Create the internal Azure Load Balancer for the availability group listener (optional).
Perform a manual or automatic failover.
Prerequisites
Create four virtual machines in Azure. Follow the Quickstart: Create Linux virtual
machine in Azure portal article to create Linux based virtual machines. Similarly, for
creating the Windows based virtual machine, follow the Quickstart: Create a
Windows virtual machine in the Azure portal article.
Install .NET 3.1 on all the Linux-based VMs that are going to be part of the cluster.
For instructions for the Linux operating system that you choose, see Install .NET on
Linux distributions.
A valid DxEnterprise license with availability group management features enabled
is required. For more information, see DxEnterprise Free Trial for a free trial.
Note
Ensure that the Linux OS you choose is a common distribution supported by
both DH2i DxEnterprise (see Minimal System Requirements) and Microsoft SQL
Server.
This tutorial uses Ubuntu 18.04, which is supported by both DH2i DxEnterprise and
Microsoft SQL Server.
For this tutorial, don't install SQL Server on the Windows VM, because this node isn't
going to be part of the cluster, and is used only to manage the cluster using DxAdmin.
After you complete this step, you should have SQL Server and SQL Server tools
(optionally) installed on all three Linux-based VMs that participate in the availability
group.
To install DxEnterprise on the three Linux-based nodes, follow the DH2i DxEnterprise
documentation based on the Linux operating system you choose. Install DxEnterprise
using any one of the methods listed below.
Ubuntu
Repo Installation Quick Start Guide
Extension Quick Start Guide
Marketplace Image Quick Start Guide
RHEL
Repo Installation Quick Start Guide
Extension Quick Start Guide
Marketplace Image Quick Start Guide
To install just the DxAdmin client tool on the Windows VM, follow DxAdmin Client UI
Quick Start Guide .
After this step, you should have the DxEnterprise cluster created on the Linux VMs, and
DxAdmin client installed on the Windows Client machine.
Note
You can also create a three-node cluster where one of the nodes is added in
configuration-only mode to enable automatic failover. For more information, see
Supported Availability Modes.
Note
During this step, the SQL Server instances are restarted to enable availability
groups.
Connect to the Windows client machine running DxAdmin to connect to the cluster
created in the step above. Follow the steps documented at MSSQL Availability Groups
with DxAdmin to enable Always On and create the virtual host and availability group.
Tip
Before adding the databases, ensure the database is created and backed up on the
primary instance of SQL Server.
After this step, you should have an availability group listener created and mapped to the
internal load balancer.
The cluster manager promotes one of the secondary replicas in the availability
group to primary.
The failed primary replica automatically rejoins the cluster after it comes back
up, and the cluster manager promotes it to a secondary replica.
You can also perform a manual failover with the following steps:
1. Connect to the cluster by using DxAdmin.
2. Expand the virtual host for the availability group.
3. Right-click on the target node/secondary replica and select Start Hosting on
Member to initiate the failover.
For more information on other operations within DxEnterprise, see the DxEnterprise
Admin Guide and DxEnterprise DxCLI Guide .
Next Steps
Learn more about Availability Groups on Linux
Quickstart: Create Linux virtual machine in Azure portal
Quickstart: Create a Windows virtual machine in the Azure portal
Supported platforms for SQL Server 2019 on Linux
Frequently asked questions for
SQL Server on Linux virtual
machines
FAQ
This article provides answers to some of the most common questions about running
SQL Server on Linux virtual machines.
If your Azure issue is not addressed in this article, visit the Azure forums on Microsoft Q
& A and Stack Overflow . You can post your issue in these forums, or post to
@AzureSupport on Twitter . You also can submit an Azure support request. To submit a
support request, on the Azure support page, select Get support.
Images
What SQL Server virtual machine gallery images
are available?
Azure maintains virtual machine (VM) images for all supported major releases of SQL
Server on all editions for both Linux and Windows. For more details, see the complete
list of Linux VM images and Windows VM images.
Creation
How do I create a Linux virtual machine with SQL
Server?
The easiest solution is to create a Linux virtual machine that includes SQL Server. For a
tutorial on signing up for Azure and creating a SQL Server VM from the portal, see
Provision a Linux virtual machine running SQL Server in the Azure portal. You also have
the option of manually installing SQL Server on a VM with either a freely licensed edition
(Developer or Express) or by reusing an on-premises license. If you bring your own
license, you must have License Mobility through Software Assurance on Azure .
Licensing
How can I install my licensed copy of SQL Server
on an Azure VM?
First, create a Linux OS-only virtual machine. Then run the SQL Server installation steps
for your Linux distribution. Unless you are installing one of the freely licensed editions of
SQL Server, you must also have a SQL Server license and License Mobility through
Software Assurance on Azure .
Administration
Can I manage a Linux virtual machine running
SQL Server with SQL Server Management Studio
(SSMS)?
Yes, but SSMS is currently a Windows-only tool. You must connect remotely from a
Windows machine to use SSMS with Linux VMs running SQL Server. Locally on Linux, the
new mssql-conf tool can perform many administrative tasks. For a cross-platform
database management tool, see Azure Data Studio.
General
Are SQL Server high-availability solutions
supported on Azure VMs?
Not at this time. Always On availability groups and Failover Clustering both require a
clustering solution in Linux, such as Pacemaker. The supported Linux distributions for
SQL Server do not support their high availability add-ons in the cloud.
Resources
Linux VMs:
Windows VMs:
Overview of SQL Server on a Windows VM
Provision SQL Server on a Windows VM
FAQ (Windows)
SQL Server on Linux
Article • 03/31/2023
Applies to:
SQL Server - Linux
SQL Server 2022 (16.x) runs on Linux. It's the same SQL Server database engine, with
many similar features and services regardless of your operating system. To find out more
about this release, see What's new in SQL Server 2022.
Install
To get started, install SQL Server on Linux using one of the following quickstarts:
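The quickstart links did not survive extraction. As one concrete example, an Ubuntu-based install follows this general sequence (a hedged sketch; the repository path encodes the Ubuntu version and SQL Server release you choose, and key-handling steps vary by Ubuntu release):

```shell
# Register the Microsoft repository (Ubuntu 20.04 / SQL Server 2022 shown).
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/20.04/mssql-server-2022.list)"

# Install the engine and run interactive setup (edition + SA password).
sudo apt-get update
sudo apt-get install -y mssql-server
sudo /opt/mssql/bin/mssql-conf setup
```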
Container images
The SQL Server container images are published and available on the Microsoft Container
Registry (MCR), and also cataloged at the following locations, based on the operating
system image that was used when creating the container image:
For RHEL-based SQL Server container images, see SQL Server Red Hat
Containers .
For Ubuntu-based SQL Server images, see SQL Server on Docker Hub .
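A typical way to pull and run one of these images is shown below; the MCR path is the published registry location, and the password is a placeholder you must replace:

```shell
# Run the latest SQL Server 2022 container image from the
# Microsoft Container Registry, listening on port 1433.
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong@Passw0rd>" \
   -p 1433:1433 --name sql1 -d mcr.microsoft.com/mssql/server:2022-latest
```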
Note
Containers are published to MCR only for the most recent Linux distributions. If
you create your own custom SQL Server container image for an older supported
distribution, it's still supported. For more information, see Upcoming updates
to SQL Server container images on Microsoft Artifact Registry (also known as MCR) .
Connect
After installation, connect to the SQL Server instance on your Linux machine. You can
connect locally or remotely and with various tools and drivers. The quickstarts
demonstrate how to use the sqlcmd command-line tool. Other tools include the
following:
Tool Tutorial
Visual Studio Code (VS Code) Use VS Code with SQL Server on Linux
SQL Server Management Studio Use SSMS on Windows to connect to SQL Server on
(SSMS) Linux
SQL Server Data Tools (SSDT) Use SSDT with SQL Server on Linux
Explore
Starting with SQL Server 2017 (14.x), SQL Server has the same underlying database
engine on all supported platforms, including Linux and containers. Therefore, many
existing features and capabilities operate the same way. This area of the documentation
exposes some of these features from a Linux perspective. It also calls out areas that have
unique requirements on Linux.
If you're already familiar with SQL Server on Linux, review the release notes for general
guidelines and known issues for this release:
Tip
For answers to frequently asked questions, see the SQL Server on Linux FAQ.
Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback
Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.
Applies to:
SQL Server
Azure SQL Database
Azure Synapse Analytics
SQL Server Data Tools (SSDT) is a modern development tool for building SQL Server
relational databases, databases in Azure SQL, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS) reports. With SSDT, you
can design and deploy any SQL Server content type with the same ease as you would
develop an application in Visual Studio.
Note
To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.
1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".
2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.
3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.
For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .
Analysis Services
Integration Services
Reporting Services
Relational databases SQL Server 2016 (13.x) - SQL Server 2022 (16.x)
With Visual Studio 2019, the required functionality to enable Analysis Services,
Integration Services, and Reporting Services projects has moved into the respective
Visual Studio (VSIX) extensions only.
Note
1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".
2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.
3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.
For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .
Analysis Services
Integration Services
Reporting Services
Offline installation
For scenarios where offline installation is required, such as low bandwidth or isolated
networks, SSDT is available for offline installation. Two approaches are available:
For more details, you can follow the Step-by-Step Guidelines for Offline Installation.
Previous versions
To download and install SSDT for Visual Studio 2017, or an older version of SSDT, see
Previous releases of SQL Server Data Tools (SSDT and SSDT-BI).
See Also
SSDT MSDN Forum
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
Analytics Platform System (PDW)
To manage your database, you need a tool. Whether your databases run in the cloud, on
Windows, on macOS, or on Linux, your tool doesn't need to run on the same platform as
the database.
You can view the links to the different SQL tools in the following tables.
Note
Recommended tools
The following tools provide a graphical user interface (GUI).
Tool Description Operating system
Azure Data Studio A light-weight editor that can run on-demand SQL queries, and view and save results as text, JSON, or Excel. Edit data and organize your favorite database connections. Windows, macOS, Linux
SQL Server Management Studio (SSMS) Manage a SQL Server instance or database with full GUI support. Access, configure, manage, administer, and develop all components of SQL Server, Azure SQL Database, and Azure Synapse Analytics. Provides a single comprehensive utility that combines a broad group of graphical tools with a number of rich script editors to provide access to SQL for developers and database administrators of all skill levels. Windows
Visual Studio Code The mssql extension for Visual Studio Code is the official SQL Server extension that supports connections to SQL Server and a rich editing experience for T-SQL in Visual Studio Code. Write T-SQL scripts in a light-weight editor. Windows, macOS, Linux
Command-line tools
The tools below are the main command-line tools.
Tool Description Operating system
bcp The bulk copy program utility (bcp) bulk copies data between an instance of SQL Server and a data file in a user-specified format. Windows, Linux
mssql-cli (preview) mssql-cli is an interactive command-line tool for querying SQL Server. Windows, macOS, Linux
sqlcmd The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt. Windows, Linux
SQL Server PowerShell SQL Server PowerShell provides cmdlets for working with SQL. Windows, macOS, Linux
Tool Description
Configuration Manager Use SQL Server Configuration Manager to configure SQL Server services and configure network connectivity. Configuration Manager runs on Windows.
Data Migration Assistant The Data Migration Assistant tool helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality in your new version of SQL Server or Azure SQL Database.
Distributed Replay Use the Distributed Replay feature to help you assess the impact of future SQL Server upgrades. Also use Distributed Replay to help assess the impact of hardware and operating system upgrades, and SQL Server tuning.
ssbdiagnose The ssbdiagnose utility reports issues in Service Broker conversations or the configuration of Service Broker services.
SQL Server Migration Assistant Use SQL Server Migration Assistant to automate database migration to SQL Server from Microsoft Access, DB2, MySQL, Oracle, and Sybase.
If you're looking for additional tools that aren't mentioned on this page, see SQL
Command Prompt Utilities and Download SQL Server extended features and tools
Migration guide: IBM Db2 to SQL Server
on Azure VM
Article • 08/30/2022
Applies to:
SQL Server on Azure VM
This guide teaches you to migrate your user databases from IBM Db2 to SQL Server on
Azure VM, by using the SQL Server Migration Assistant for Db2.
Prerequisites
To migrate your Db2 database to SQL Server, you need:
Pre-migration
After you have met the prerequisites, you're ready to discover the topology of your
environment and assess the feasibility of your migration.
Assess
Use SSMA for DB2 to review database objects and data, and assess databases for
migration.
3. Provide a project name and a location to save your project. Then select a SQL
Server migration target from the drop-down list, and select OK.
4. On Connect to Db2, enter values for the Db2 connection details.
5. Right-click the Db2 schema you want to migrate, and then choose Create report.
This will generate an HTML report. Alternatively, you can choose Create report
from the navigation bar after selecting the schema.
6. Review the HTML report to understand conversion statistics and any errors or
warnings. You can also open the report in Excel to get an inventory of Db2 objects
and the effort required to perform schema conversions. The default location for
the report is in the report folder within SSMAProjects.
4. You can change the type mapping for each table by selecting the table in the Db2
Metadata Explorer.
Convert schema
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad hoc queries to statements. Right-click the node, and
then choose Add statements.
4. After the conversion finishes, compare and review the structure of the schema to
identify potential problems. Address the problems based on the recommendations.
5. In the Output pane, select Review results. In the Error list pane, review errors.
6. Save the project locally for an offline schema remediation exercise. From the File
menu, select Save Project. This gives you an opportunity to evaluate the source
and target schemas offline, and perform remediation before you can publish the
schema to SQL Server on Azure VM.
Migrate
After you have completed assessing your databases and addressing any discrepancies,
the next step is to execute the migration process.
To publish your schema and migrate your data, follow these steps:
1. Publish the schema. In SQL Server Metadata Explorer, from the Databases node,
right-click the database. Then select Synchronize with Database.
2. Migrate the data. Right-click the database or object you want to migrate in Db2
Metadata Explorer, and choose Migrate data. Alternatively, you can select Migrate
Data from the navigation bar. To migrate data for an entire database, select the
check box next to the database name. To migrate data from individual tables,
expand the database, expand Tables, and then select the check box next to the
table. To omit data from individual tables, clear the check box.
3. Provide connection details for both the Db2 and SQL Server instances.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly
consumed the source need to start consuming the target. Accomplishing this will in
some cases require changes to the applications.
Perform tests
Testing consists of the following activities:
1. Develop validation tests: To test database migration, you need to use SQL queries.
You must create the validation queries to run against both the source and the
target databases. Your validation queries should cover the scope you have defined.
2. Set up the test environment: The test environment should contain a copy of the
source database and the target database. Be sure to isolate the test environment.
3. Run validation tests: Run the validation tests against the source and the target,
and then analyze the results.
4. Run performance tests: Run performance tests against the source and the target,
and then analyze and compare the results.
Migration assets
For additional assistance, see the following resources, which were developed in support
of a real-world migration project engagement:
Asset Description
Data workload assessment model and tool This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target-platform decision process.
Db2 zOS data assets discovery and assessment package After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.
IBM Db2 LUW inventory scripts and artifacts This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.
IBM Db2 to SQL Server - Database Compare utility The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.
The Data SQL Engineering team developed these resources. This team's core charter is
to unblock and accelerate complex modernization for data platform migration projects
to Microsoft's Azure data platform.
Next steps
After migration, review the Post-migration validation and optimization guide.
For Microsoft and third-party services and tools that are available to assist you with
various database and data migration scenarios, see Data migration services and tools.
Applies to:
SQL Server on Azure VM
This guide teaches you to migrate your Oracle schemas to SQL Server on Azure Virtual
Machines by using SQL Server Migration Assistant for Oracle.
Prerequisites
To migrate your Oracle schema to SQL Server on Azure Virtual Machines, you need:
Pre-migration
To prepare to migrate to the cloud, verify that your source environment is supported
and that you've addressed any prerequisites. Doing so will help to ensure an efficient
and successful migration.
Discover
Use MAP Toolkit to identify existing data sources and details about the features your
business is using. Doing so will give you a better understanding of the migration and
help you plan for it. This process involves scanning the network to identify your
organization's Oracle instances and the versions and features you're using.
To use MAP Toolkit to do an inventory scan, follow these steps:
3. Select Create an inventory database. Enter the name for the new inventory
database and a brief description, and then select OK
4. Select Collect inventory data to open the Inventory and Assessment Wizard:
5. In the Inventory and Assessment Wizard, select Oracle, and then select Next:
6. Select the computer search option that best suits your business needs and
environment, and then select Next:
7. Either enter credentials or create new credentials for the systems that you want to
explore, and then select Next:
8. Set the order of the credentials, and then select Next:
9. Enter the credentials for each computer you want to discover. You can use unique
credentials for every computer/machine, or you can use the All Computers
credential list.
10. Verify your selections, and then select Finish:
11. After the scan finishes, view the Data Collection summary. The scan might take a
few minutes, depending on the number of databases. Select Close when you're
done:
12. Select Options to generate a report about the Oracle assessment and database
details. Select both options, one at a time, to generate the report.
Assess
After you identify the data sources, use SQL Server Migration Assistant for Oracle to
assess the Oracle instances migrating to the SQL Server VM. The assistant will help you
understand the gaps between the source and destination databases. You can review
database objects and data, assess databases for migration, migrate database objects to
SQL Server, and then migrate data to SQL Server.
3. Provide a project name and a location for your project, and then select a SQL
Server migration target from the list. Select OK:
4. Select Connect to Oracle. Enter values for the Oracle connection in the Connect to
Oracle dialog box:
6. Review the HTML report for conversion statistics, errors, and warnings. Analyze it
to understand conversion problems and resolutions.
You can also open the report in Excel to get an inventory of Oracle objects and the
effort required to complete schema conversions. The default location for the report
is the report folder in SSMAProjects.
1. (Optional) To convert dynamic or ad hoc queries, right-click the node and select
Add statement.
4. After the schema conversion is complete, review the converted objects and
compare them to the original objects to identify potential problems. Use the
recommendations to address any problems:
Compare the converted Transact-SQL text to the original stored procedures and
review the recommendations:
You can save the project locally for an offline schema remediation exercise. To do
so, select Save Project on the File menu. Saving the project locally lets you
evaluate the source and target schemas offline and perform remediation before
you publish the schema to SQL Server.
5. Select Review results in the Output pane, and then review errors in the Error list
pane.
6. Save the project locally for an offline schema remediation exercise. Select Save
Project on the File menu. This gives you an opportunity to evaluate the source and
target schemas offline and perform remediation before you publish the schema to
SQL Server on Azure Virtual Machines.
Migrate
After you have the necessary prerequisites in place and have completed the tasks
associated with the pre-migration stage, you're ready to start the schema and data
migration. Migration involves two steps: publishing the schema and migrating the data.
To publish your schema and migrate the data, follow these steps:
1. Publish the schema: right-click the database in SQL Server Metadata Explorer and
select Synchronize with Database. Doing so publishes the Oracle schema to SQL
Server on Azure Virtual Machines.
Review the mapping between your source project and your target:
2. Migrate the data: right-click the database or object that you want to migrate in
Oracle Metadata Explorer and select Migrate Data. Or, you can select the Migrate
Data tab. To migrate data for an entire database, select the check box next to the
database name. To migrate data from individual tables, expand the database,
expand Tables, and then select the checkboxes next to the tables. To omit data
from individual tables, clear the checkboxes.
3. Provide connection details for Oracle and SQL Server on Azure Virtual Machines in
the dialog box.
Instead of using SSMA, you could use SQL Server Integration Services (SSIS) to migrate
the data. To learn more, see:
The article SQL Server Integration Services.
The white paper SSIS for Azure and Hybrid Data Movement .
Post-migration
After you complete the migration stage, you need to complete a series of post-
migration tasks to ensure that everything is running as smoothly and efficiently as
possible.
Remediate applications
After the data is migrated to the target environment, all the applications that previously consumed the source need to start consuming the target. Accomplishing this might require changes to the applications.
Data Access Migration Toolkit is an extension for Visual Studio Code. It allows you to
analyze your Java source code and detect data access API calls and queries. The toolkit
provides a single-pane view of what needs to be addressed to support the new
database back end. To learn more, see Migrate your Java application from Oracle .
Perform tests
To test your database migration, complete these activities:
1. Develop validation tests. To test database migration, you need to use SQL queries.
Create the validation queries to run against both the source and target databases.
Your validation queries should cover the scope that you've defined.
2. Set up a test environment. The test environment should contain a copy of the
source database and the target database. Be sure to isolate the test environment.
3. Run validation tests. Run the validation tests against the source and the target,
and then analyze the results.
4. Run performance tests. Run performance tests against the source and the target,
and then analyze and compare the results.
Description: Enter any additional information to identify the purpose of the test
case.
3. Select the objects that are part of the test case from the Oracle object tree located
on the left side.
In this example, stored procedure ADD_REGION and table REGION are selected.
4. Next, select the tables, foreign keys and other dependent objects from the Oracle
object tree in the left window.
To learn more, see Selecting and configuring affected objects.
5. Review the evaluation sequence of objects. Change the order by selecting the
buttons in the grid.
6. Finalize the test case by reviewing the information provided in the previous steps.
Configure the test execution options based on the test scenario.
For more information on test case settings, see Finishing test case preparation.
1. Select the test case from the test repository, and then select Run.
5. A real-time progress bar shows the execution status of the test run.
6. Review the report after the test is completed. The report provides statistics, any
errors that occurred during the test run, and a detailed report.
7. Select Details to get more information.
Note
For more information about these problems and specific steps to mitigate them,
see the Post-migration validation and optimization guide.
Migration resources
For more help with completing this migration scenario, see the following resources,
which were developed to support a real-world migration project.
Data Workload Assessment Model and Tool: This tool provides suggested best-fit target platforms, cloud readiness, and application/database remediation levels for a given workload. It offers simple one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target-platform decision process.
Oracle Inventory Script Artifacts: This asset includes a PL/SQL query that targets Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format.
Automate SSMA Oracle Assessment Collection & Consolidation: This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the XML files that you need to run an SSMA assessment in console mode. You provide the source.csv file by taking an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml.
SSMA issues and possible remedies when migrating Oracle databases: With Oracle, you can assign a non-scalar condition in a WHERE clause. SQL Server doesn't support this type of condition, so SSMA for Oracle doesn't convert queries that have a non-scalar condition in the WHERE clause. Instead, it generates an error: O2SS0001. This white paper provides details on the problem and ways to resolve it.
Oracle to SQL Server Migration Handbook: This document focuses on the tasks associated with migrating an Oracle schema to the latest version of SQL Server. If the migration requires changes to features/functionality, you need to carefully consider the possible effect of each change on the applications that use the database.
Oracle to SQL Server - Database Compare utility: SSMA for Oracle Tester is the recommended tool to automatically validate the database object conversion and data migration, and it's a superset of Database Compare functionality. If you're looking for an alternative data validation option, you can use the Database Compare utility to compare data down to the row or column level in all or selected tables, rows, and columns.
The Data SQL Engineering team developed these resources. This team's core charter is
to unblock and accelerate complex modernization for data-platform migration projects
to the Microsoft Azure data platform.
Next steps
To check the availability of services applicable to SQL Server, see the Azure Global
infrastructure center .
For a matrix of the Microsoft and third-party services and tools that are available
to help you with various database and data migration scenarios and specialized
tasks, see Services and tools for data migration.
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices to cost and size workloads migrated to Azure
To assess the application access layer, use Data Access Migration Toolkit Preview .
For details on how to do data access layer A/B testing, see Overview of Database
Experimentation Assistant.
Migration overview: SQL Server to SQL
Server on Azure VMs
Article • 12/26/2022
Applies to:
SQL Server on Azure VM
Learn about the different migration strategies to migrate your SQL Server to SQL Server
on Azure Virtual Machines (VMs).
Overview
Migrate to SQL Server on Azure Virtual Machines (VMs) when you want to use the
familiar SQL Server environment with OS control, and want to take advantage of cloud-
provided features such as built-in VM high availability, automated backups, and
automated patching.
Save on costs by bringing your own license with the Azure Hybrid Benefit licensing
model or extend support for SQL Server 2012 by getting free security updates.
You can use the Azure SQL migration extension for Azure Data Studio to get a right-sized
SQL Server on Azure Virtual Machines recommendation. The extension collects
performance data from your source SQL Server instance to provide a right-sized Azure
recommendation that meets your workload's performance needs with minimal cost. To
learn more, see Get right-sized Azure recommendation for your on-premises SQL Server
database(s).
To determine the VM size and storage requirements for all the workloads in your data
estate, it's recommended that you size them through a performance-based Azure
Migrate assessment. If this isn't an available option, see the following article on creating
your own baseline for performance.

You should also consider the correct installation and configuration of SQL Server on the
VM. It's recommended to use the Azure SQL virtual machine image gallery, because this
allows you to create a SQL Server VM with the right version, edition, and operating
system. Doing so also registers the Azure VM with the SQL Server resource provider
automatically, enabling features such as automated backups and automated patching.
Migration strategies
There are two migration strategies to migrate your user databases to an instance of SQL
Server on Azure VMs:
migrate, and lift and shift.
The appropriate approach for your business typically depends on the following factors:
Lift and shift: Use the lift and shift migration strategy to move the entire physical or
virtual SQL Server from its current location onto an instance of SQL Server on an Azure
VM. Select an Azure VM from Azure Marketplace or a prepared SQL Server image that
matches the source SQL Server version. Use this strategy for single to large-scale
migrations; it's even applicable to scenarios such as a data center exit.

Migrate: Use the migrate strategy when you want to upgrade the target SQL Server
and/or operating system version. Use the Azure SQL migration extension for Azure Data
Studio to assess and get recommendations for a right-sized Azure deployment. Use this
strategy when there's a requirement or desire to migrate to SQL Server on Azure Virtual
Machines, or when there's a requirement to upgrade a legacy SQL Server and/or OS
versions that are no longer in support. This strategy may require some application or
user database changes to support the SQL Server upgrade.
Note
It's now possible to lift and shift both your failover cluster instance and availability
group solution to SQL Server on Azure VMs using Azure Migrate.
Migrate
Owing to the ease of setup, the recommended migration approach is to take a native
SQL Server backup locally and then copy the file to Azure. This method supports larger
databases (>1 TB) for all versions of SQL Server starting from 2008. Starting with SQL
Server 2014, for databases smaller than 1 TB that have good connectivity to Azure, SQL
Server backup to URL is the better approach.
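As a sketch, backup to URL needs a credential for the storage container and a URL target. The storage account, container, SAS token, and database name below are placeholders; on SQL Server 2014 the credential instead uses the storage account name and access key:

```sql
-- Create a credential named after the container URL (SQL Server 2016 and later, SAS-based).
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS token, without the leading ?>';

-- Back up directly to Azure Blob Storage, compressed to minimize transfer size.
BACKUP DATABASE MyDatabase
TO URL = 'https://mystorageacct.blob.core.windows.net/backups/MyDatabase.bak'
WITH COMPRESSION, STATS = 10;
```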
When migrating SQL Server databases to an instance of SQL Server on Azure VMs, it's
important to choose an approach that suits when you need to cut over to the target
server, because this affects the application downtime window.
The following methods are available to migrate your SQL Server database to SQL Server
on Azure VMs:

Azure SQL migration extension for Azure Data Studio (source: SQL Server 2008; target: SQL Server 2008; backup size constraint: Azure VM storage limit): This is an easy-to-use, wizard-based extension in Azure Data Studio for migrating SQL Server database(s) to SQL Server on Azure virtual machines. Use compression to minimize backup size for transfer.

Backup and restore (backup size constraint: 1 TB): Automation & scripting with T-SQL or a maintenance plan.

Database Migration Assistant (DMA) (source: SQL Server 2005; target: SQL Server 2008 SP4; backup size constraint: Azure VM storage limit): The DMA assesses SQL Server on-premises and then seamlessly upgrades to later versions of SQL Server or migrates to SQL Server on Azure VMs, Azure SQL Database, or Azure SQL Managed Instance. Shouldn't be used on FILESTREAM-enabled user databases.

Detach and attach (source: SQL Server 2008 SP4; target: SQL Server 2014; backup size constraint: Azure VM storage limit): Use this method when you plan to store these files using Azure Blob Storage and attach them to an instance of SQL Server on an Azure VM. It's useful with very large databases or when the time to back up and restore is too long.

Log shipping: This provides minimal downtime during failover and has less configuration overhead than setting up an Always On availability group.

Convert the on-premises machine to Hyper-V VHDs, upload to Azure Blob storage, and then deploy a new virtual machine using the uploaded VHD (source: SQL Server 2005 or greater; target: SQL Server 2005 or greater; backup size constraint: Azure VM storage limit): Use when bringing your own SQL Server license, when migrating a database that you'll run on an older version of SQL Server, or when migrating system and user databases together as part of the migration of a database dependent on other user databases and/or system databases.

Ship a hard drive using the Windows Import/Export Service (source: SQL Server 2005 or greater; target: SQL Server 2005 or greater; backup size constraint: Azure VM storage limit): Use the Windows Import/Export Service when the manual copy method is too slow, such as with very large databases.
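As a sketch of the detach and attach method with files in Azure Blob Storage, assuming a credential for the container already exists; the database name, file names, and URLs below are placeholders:

```sql
-- On the source instance: detach the (hypothetical) database.
EXEC sp_detach_db @dbname = N'MyDatabase';

-- Copy the .mdf and .ldf files to the Blob container, then on the target VM:
CREATE DATABASE MyDatabase
ON (NAME = MyDatabase_Data,
    FILENAME = 'https://mystorageacct.blob.core.windows.net/data/MyDatabase.mdf'),
   (NAME = MyDatabase_Log,
    FILENAME = 'https://mystorageacct.blob.core.windows.net/data/MyDatabase.ldf')
FOR ATTACH;
```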
Tip
For large data transfers with limited to no network options, see Large data
transfers with limited connectivity.
Considerations
The following is a list of key points to consider when reviewing migration methods:
For optimum data transfer performance, migrate databases and files onto an
instance of SQL Server on Azure VM using a compressed backup file. For larger
databases, in addition to compression, split the backup file into smaller files for
increased performance during backup and transfer.
If migrating from SQL Server 2014 or higher, consider encrypting the backups to
protect data during network transfer.
To minimize downtime during database migration, use the Azure SQL migration
extension in Azure Data Studio or Always On availability group option.
For limited to no network options, use offline migration methods such as backup
and restore, or disk transfer services available in Azure.
To also change the version of SQL Server on a SQL Server on Azure VM, see
change SQL Server edition.
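The first two considerations above can be combined in one backup command. This is a sketch; the database name, file paths, and certificate are placeholders, and backup encryption assumes SQL Server 2014 or later with a server certificate already in place:

```sql
-- Compressed, encrypted backup split into four files for faster backup
-- and parallel transfer to the Azure VM.
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_1.bak',
   DISK = 'D:\Backups\MyDatabase_2.bak',
   DISK = 'D:\Backups\MyDatabase_3.bak',
   DISK = 'D:\Backups\MyDatabase_4.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert),
     STATS = 10;
```

On the target, list the same files in the RESTORE DATABASE ... FROM DISK = ... statement.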
Business Intelligence
There may be additional considerations when migrating SQL Server Business Intelligence
services that are outside the scope of database migrations.

To migrate SQL Server Integration Services (SSIS) packages, use one of the following
approaches:
Backup and restore the SSISDB from the source SQL Server instance to SQL Server
on Azure VM. This will restore your packages in the SSISDB to the Integration
Services Catalog on your target SQL Server on Azure VM.
Redeploy your SSIS packages on your target SQL Server on Azure VM using one of
the deployment options.
If you have SSIS packages deployed in the package deployment model, you can convert
them before migration. See the project conversion tutorial to learn more.
Alternatively, you can also migrate SSRS reports to paginated reports in Power BI. Use
the RDL Migration Tool to help prepare and migrate your reports. Microsoft
developed this tool to help customers migrate Report Definition Language (RDL) reports
from their SSRS servers to Power BI. It's available on GitHub, and it documents an end-
to-end walkthrough of the migration scenario.
Alternatively, you can consider migrating your on-premises Analysis Services tabular
models to Azure Analysis Services or to Power BI Premium by using the new XMLA
read/write endpoints.
Server objects
Depending on the setup in your source SQL Server, there may be additional SQL Server
features that will require manual intervention to migrate them to SQL Server on Azure
VM by generating scripts in Transact-SQL (T-SQL) using SQL Server Management Studio
and then running the scripts on the target SQL Server on Azure VM. Some of the
commonly used features are:
For a complete list of metadata and server objects that you need to move, see Manage
Metadata When Making a Database Available on Another Server.
Supported versions
As you prepare for migrating SQL Server databases to SQL Server on Azure VMs, be sure
to consider the versions of SQL Server that are supported. For a list of current supported
SQL Server versions on Azure VMs, please see SQL Server on Azure VMs.
Migration assets
For additional assistance, see the following resources, which were developed for
real-world migration projects.
Data workload assessment model and tool: This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.
Perfmon data collection automation using Logman: A tool that collects Perfmon data to understand baseline performance, which helps with the migration target recommendation. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server.
Multiple-SQL-VM-VNet-ILB: This whitepaper outlines the steps to set up multiple Azure virtual machines in a SQL Server Always On availability group configuration.
Azure virtual machines supporting Ultra SSD per region: These PowerShell scripts provide a programmatic option to retrieve the list of regions that support Azure virtual machines with Ultra SSDs.
The Data SQL Engineering team developed these resources. This team's core charter is
to unblock and accelerate complex modernization for data platform migration projects
to Microsoft's Azure data platform.
Next steps
To start migrating your SQL Server databases to SQL Server on Azure VMs, see the
Individual database migration guide.
For a matrix of the Microsoft and third-party services and tools that are available to
assist you with various database and data migration scenarios as well as specialty tasks,
see the article Service and tools for data migration.
To learn more about Azure SQL, see:
Deployment options
SQL Server on Azure VMs
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices to cost and size workloads migrated to Azure
Applies to:
SQL Server on Azure VM
In this guide, you learn how to discover, assess, and migrate your user databases from
SQL Server to an instance of SQL Server on Azure Virtual Machines by using tools and
techniques based on your requirements.
For information about extra migration strategies, see the SQL Server VM migration
overview. For other migration guides, see Azure Database Migration Guides.
Prerequisites
Migrating to SQL Server on Azure Virtual Machines requires the following resources:
Pre-migration
Before you begin your migration, you need to discover the topology of your SQL
environment and assess the feasibility of your intended migration.
Discover
Azure Migrate assesses the migration suitability of on-premises computers, performs
performance-based sizing, and provides cost estimates for running them in Azure. To
plan for the migration, use Azure Migrate to identify existing data sources and details
about the features your SQL Server instances use. This process involves scanning the
network to identify all of your SQL Server instances in your organization with the version
and features in use.
Important
When you choose a target Azure virtual machine for your SQL Server instance, be
sure to consider the Performance guidelines for SQL Server on Azure Virtual
Machines.
For more discovery tools, see the services and tools available for data migration
scenarios.
Assess
When migrating from SQL Server on-premises to SQL Server on Azure Virtual Machines,
it is unlikely that you'll have any compatibility or feature parity issues if the source and
target SQL Server versions are the same. If you're not upgrading the version of SQL
Server, skip this step and move to the Migrate section.
Before migration, it's still a good practice to run an assessment of your SQL Server
databases to identify migration blockers (if any). You can use the Azure SQL migration
extension for Azure Data Studio to do so before migration.
Note
If you are assessing the entire SQL Server data estate at scale on VMware, use
Azure Migrate to get Azure SQL deployment recommendations, target sizing, and
monthly estimates.
Important
To assess databases using the Azure SQL migration extension, ensure that the
logins used to connect the source SQL Server are members of the sysadmin server
role or have CONTROL SERVER permission.
For a version upgrade, if you're upgrading to an instance of SQL Server on Azure Virtual
Machines with a higher version, use Data Migration Assistant to assess your on-premises
SQL Server instances and understand the gaps between the source and target versions.
You can also assess the application data access layer in the following ways:
By using captured extended events or SQL Server Profiler traces of your user
databases. You can also use the Database Experimentation Assistant to create a
trace log that can be used for A/B testing.
By using the Data Access Migration Toolkit (preview), which provides discovery
and assessment of SQL queries within the code and is used to migrate application
source code from one database platform to another. This tool supports popular file
types like C#, Java, XML, and plain text. For a guide on how to perform a Data
Access Migration Toolkit assessment, see the Use Data Migration Assistant blog
post.
During the assessment of user databases, use Data Migration Assistant to import
captured trace files or Data Access Migration Toolkit files.
Assessments at scale
If you have multiple servers that require Azure readiness assessment, you can automate
the process by using scripts using one of the following options. To learn more about
using scripting see Migrate databases at scale using automation.
For summary reporting across large estates, Data Migration Assistant assessments can
also be consolidated into Azure Migrate.
For an upgrade scenario, you might have a series of recommendations to ensure your
user databases perform and function correctly after the upgrade. Data Migration
Assistant provides details on the affected objects and resources for how to resolve each
issue.
Make sure to resolve all breaking changes and behavior changes before you start
production upgrade.
For deprecated features, you can choose to run your user databases in their original
compatibility mode if you want to avoid making these changes and speed up migration.
This action will prevent upgrading your database compatibility until the deprecated
items have been resolved.
Caution
Not all SQL Server versions support all compatibility modes. Check that your target
SQL Server version supports your chosen database compatibility. For example, SQL
Server 2019 doesn't support databases with level 90 compatibility (which is SQL
Server 2005). These databases would require, at least, an upgrade to compatibility
level 100.
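Checking and raising a database's compatibility level is a two-statement task; the database name here is a placeholder:

```sql
-- Check the current compatibility level of a (hypothetical) database.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyDatabase';

-- Raise a level-90 (SQL Server 2005) database to level 100,
-- the minimum level that SQL Server 2019 supports.
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 100;
```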
Migrate
After you've completed the pre-migration steps, you're ready to migrate the user
databases and components. Migrate your databases by using your preferred migration
method.
migrate using the Azure SQL migration extension for Azure Data Studio with
minimal downtime
backup and restore
detach and attach from a URL
convert to a VM, upload to a URL, and deploy as a new VM
log shipping
ship a hard drive
migrate objects outside user databases
1. Download and install Azure Data Studio and the Azure SQL migration extension.
2. Launch the Migrate to Azure SQL wizard in the extension in Azure Data Studio.
3. Select databases for assessment and view migration readiness or issues (if any).
Additionally, collect performance data and get a right-sized Azure recommendation.
4. Select your Azure account and your target SQL Server on Azure Virtual Machine
from your subscription.
5. Select the location of your database backups. Your database backups can either be
located on an on-premises network share or in an Azure Blob Storage container.
6. Create a new Azure Database Migration Service by using the wizard in Azure Data
Studio. If you previously created an Azure Database Migration Service by using
Azure Data Studio, you can reuse it if desired.
7. Optional: If your backups are on an on-premises network share, download and
install self-hosted integration runtime on a machine that can connect to source
SQL Server and the location containing the backup files.
8. Start the database migration and monitor the progress in Azure Data Studio. You
can also monitor the progress under the Azure Database Migration Service
resource in Azure portal.
9. Complete the cutover.
a. Stop all incoming transactions to the source database.
b. Make application configuration changes to point to the target database in SQL
Server on Azure Virtual Machine.
c. Take any tail log backups for the source database in the backup location
specified.
d. Ensure all database backups have the status Restored in the monitoring details
page.
e. Select Complete cutover in the monitoring details page.
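Step c, the tail-log backup, can be sketched in T-SQL; the database name and backup path are placeholders:

```sql
-- Tail-log backup taken on the source after stopping incoming transactions.
-- WITH NORECOVERY leaves the source database in the RESTORING state,
-- so no further changes can be made after cutover.
BACKUP LOG MyDatabase
TO DISK = N'\\fileshare\backups\MyDatabase_tail.trn'
WITH NORECOVERY, INIT;
```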
Log shipping
Log shipping replicates transaction log files from on-premises to an instance of
SQL Server on an Azure VM. This option provides minimal downtime during failover and
has less configuration overhead than setting up an Always On availability group.
For more information, see Log Shipping Tables and Stored Procedures.
Feature/Component: Migration method

The tempdb database: Plan to move tempdb onto the Azure VM temporary disk (SSD) for best performance. Be sure to pick a VM size that has a sufficient local SSD to accommodate your tempdb.
User databases with FileStream: Use the backup and restore methods for migration. Data Migration Assistant doesn't support databases with FileStream.
Security (SQL Server and Windows logins): Use Data Migration Assistant to migrate user logins.
Server objects (backup devices): Replace with database backups by using Azure Backup, or write backups to Azure Storage (SQL Server 2012 SP1 CU2 and later). This procedure uses the SQL VM resource provider.
Operating system (files, file shares): Make a note of any other files or file shares that are used by your SQL servers and replicate them on the Azure Virtual Machines target.
Post-migration
After you successfully complete the migration stage, you need to complete a series of
post-migration tasks to ensure that everything is functioning as smoothly and efficiently
as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly
consumed the source need to start consuming the target. Accomplishing this task might
require changes to the applications in some cases.
Apply any fixes recommended by Data Migration Assistant to user databases. You need
to script these fixes to ensure consistency and allow for automation.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests: To test the database migration, you need to use SQL
queries. Create validation queries to run against both the source and target
databases. Your validation queries should cover the scope you've defined.
2. Set up a test environment: The test environment should contain a copy of the
source database and the target database. Be sure to isolate the test environment.
3. Run validation tests: Run validation tests against the source and the target, and
then analyze the results.
4. Run performance tests: Run performance tests against the source and target, and
then analyze and compare the results.
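A simple validation query of the kind described in step 1 might compare row counts per table; this sketch uses standard catalog views and should be run on both source and target, comparing the outputs:

SQL

-- Row counts per table, from the heap/clustered index partitions.
SELECT s.name AS schema_name, t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
JOIN sys.partitions AS p ON t.object_id = p.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
ORDER BY s.name, t.name;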
Tip
Use the Database Experimentation Assistant to assist with evaluating the target
SQL Server performance.
Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying
completeness, and addressing potential performance issues with the workload.
For more information about these issues and the steps to mitigate them, see:
Next steps
To check the availability of services that apply to SQL Server, see the Azure global
infrastructure center .
For a matrix of Microsoft and third-party services and tools that are available to assist
you with various database and data migration scenarios and specialty tasks, see Services
and tools for data migration.
Deployment options
SQL Server on Azure Virtual Machines
Azure Total Cost of Ownership (TCO) Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
To assess the application access layer, see Data Access Migration Toolkit (preview) .
For information about how to perform A/B testing for the data access layer, see
Overview of Database Experimentation Assistant.
Migrate an availability group to SQL
Server on Azure VM
Article • 10/27/2022
This article teaches you to migrate your SQL Server Always On availability group to SQL
Server on Azure VMs using the Azure Migrate: Server Migration tool. Using the
migration tool, you will be able to migrate each replica in the availability group to an
Azure VM hosting SQL Server, as well as the cluster metadata, availability group
metadata and other necessary high availability components.
This guide uses the agent-based migration approach of Azure Migrate, which treats any
server or virtual machine as a physical server. When migrating physical machines, Azure
Migrate: Server Migration uses the same replication architecture as the agent-based
disaster recovery in the Azure Site Recovery service, and some components share the
same code base. Some content might link to Site Recovery documentation.
Prerequisites
Before you begin this tutorial, you should complete the following prerequisites:
Prepare Azure
Prepare Azure for migration with the Server Migration tool.
Task | Details
Create an Azure Migrate project | Your Azure account needs Contributor or Owner permissions to create a new project.
Verify permissions for your Azure account | Your Azure account needs Contributor or Owner permissions on the Azure subscription, permissions to register Azure Active Directory (Azure AD) apps, and User Access Administrator permissions on the Azure subscription to create a Key Vault, to create a VM, and to write to an Azure managed disk.
Set up an Azure virtual network | Set up an Azure virtual network (VNet). When you replicate to Azure, Azure VMs are created and joined to the Azure VNet that you specify when you set up migration.
1. In the Azure portal, open the subscription, and select Access control (IAM).
2. In Check access, find the relevant account, and select it to view permissions.
3. You should have Contributor or Owner permissions.
If you just created a free Azure account, you're the owner of your
subscription.
If you're not the subscription owner, work with the owner to assign the role.
If you need to assign permissions, follow the steps in Prepare for an Azure user account.
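If you do need to assign a role yourself, a minimal sketch with the Az PowerShell module looks like the following; the account name and subscription ID are placeholders:

PowerShell

# Assign the Contributor role at subscription scope.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>"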
Create a Windows Server 2016 machine to host the replication appliance. Review
the machine requirements.
The replication appliance uses MySQL. Review the options for installing MySQL on
the appliance.
Review the Azure URLs required for the replication appliance to access public and
government clouds.
Review port access requirements for the replication appliance.
Note
The replication appliance should be installed on a machine other than the source
machine you are replicating or migrating, and not on any machine that has had the
Azure Migrate discovery and assessment appliance installed before.
1. In the Azure Migrate project > Servers, in Azure Migrate: Server Migration, select
Discover.
2. In Discover machines > Are your machines virtualized?, select Physical or other
(AWS, GCP, Xen, etc.).
3. In Target region, select the Azure region to which you want to migrate the
machines.
5. Select Create resources. This creates an Azure Site Recovery vault in the
background.
If you've already set up migration with Azure Migrate: Server Migration, the
target option can't be configured, since resources were set up previously.
You can't change the target region for this project after selecting this button.
All subsequent migrations are to this region.
8. Copy the appliance setup file and key file to the Windows Server 2016 machine
you created for the appliance.
9. After the installation completes, the Appliance configuration wizard launches
automatically. (You can also launch the wizard manually by using the cspsconfigtool
shortcut that is created on the desktop of the appliance machine.) Use the Manage
Accounts tab of the wizard to create a dummy account with the following details:
You will use this dummy account in the Enable Replication stage.
10. After setup completes, and the appliance restarts, in Discover machines, select the
new appliance in Select Configuration Server, and select Finalize registration.
Finalize registration performs a couple of final tasks to prepare the replication
appliance.
2. Navigate to %ProgramData%\ASR\home\svsystems\pushinstallsvc\repository .
3. Find the installer for the machine operating system and version. Review supported
operating systems.
5. Make sure that you have the current passphrase that was generated when you
deployed the appliance.
Don't regenerate the passphrase. This will break connectivity and you will
have to reregister the replication appliance.
In the /Platform parameter, specify VMware for both VMware machines and
physical machines.
6. Connect to the machine and extract the contents of the installer file to a local
folder (such as c:\temp). Run this in an admin command prompt:
MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted
cd C:\Temp\Extracted
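The install and registration commands that typically follow are sketched below; the file names and parameters are assumptions based on the Site Recovery Mobility service installer (the /Platform parameter mentioned above belongs to it), so verify them against your extracted files:

UnifiedAgent.exe /Role "MS" /Platform "VmWare" /Silent
UnifiedAgentConfigurator.exe /CSEndPoint <appliance-IP-address> /PassphraseFilePath <path-to-passphrase-file>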
It may take some time after installation for discovered machines to appear in Azure
Migrate: Server Migration. As VMs are discovered, the Discovered servers count rises.
PowerShell
./Get-ClusterInfo.ps1
Column header | Description
NewIP | Specify the IP address in the Azure virtual network (or subnet) for each resource in the CSV file.
ServicePort | Specify the service port to be used by each resource in the CSV file. For the SQL clustered resource, use the same value for the service port as the probe port in the CSV. For other cluster roles, the default value used is 1433, but you can continue to use the port numbers that are configured in your current setup.
Parameter | Type | Description
ConfigFilePath | Mandatory | Specify the path for the Cluster-Config.csv file that you filled out in the previous step.
ResourceGroupName | Mandatory | Specify the name of the resource group in which the load balancer is to be created.
VNetName | Mandatory | Specify the name of the Azure virtual network that the load balancer will be associated with.
SubnetName | Mandatory | Specify the name of the subnet in the Azure virtual network that the load balancer will be associated with.
VNetResourceGroupName | Mandatory | Specify the name of the resource group for the Azure virtual network that the load balancer will be associated with.
Location | Mandatory | Specify the location in which the load balancer should be created.
PowerShell
./Create-ClusterLoadBalancer.ps1 -ConfigFilePath ./clusterinfo.csv `
    -ResourceGroupName $resourcegroupname -VNetName $vnetname `
    -SubnetName $subnetname -VNetResourceGroupName $vnetresourcegroupname `
    -Location "eastus" -LoadBalancerName $loadbalancername
Replicate machines
Now, select machines for migration. You can replicate up to 10 machines together. If
you need to replicate more, then replicate them simultaneously in batches of 10.
1. In the Azure Migrate project > Servers, Azure Migrate: Server Migration, select
Replicate.
2. In Replicate, > Source settings > Are your machines virtualized?, select Physical
or other (AWS, GCP, Xen, etc.).
3. In On-premises appliance, select the name of the Azure Migrate appliance that
you set up.
5. In Guest credentials, select the dummy account created earlier during the
replication appliance setup. Then select Next: Virtual machines.
7. Check each VM you want to migrate. Then select Next: Target settings.
8. In Target settings, select the subscription, and target region to which you'll
migrate, and specify the resource group in which the Azure VMs will reside after
migration.
9. In Virtual Network, select the Azure VNet/subnet to which the Azure VMs will be
joined after migration.
Note
To replicate VMs with CMK, you'll need to create a disk encryption set under
the target Resource Group. A disk encryption set object maps Managed Disks
to a Key Vault that contains the CMK to use for SSE.
Select No if you don't want to apply Azure Hybrid Benefit. Then select Next.
Select Yes if you have Windows Server machines that are covered with active
Software Assurance or Windows Server subscriptions, and you want to apply
the benefit to the machines you're migrating. Then select Next.
13. In Compute, review the VM name, size, OS disk type, and availability configuration
(if selected in the previous step). VMs must conform with Azure requirements.
14. In Disks, specify whether the VM disks should be replicated to Azure, and select
the disk type (standard SSD/HDD or premium managed disks) in Azure. Then
select Next.
15. In Review and start replication, review the settings, and select Replicate to start
the initial replication for the servers.
Note
You can update replication settings any time before replication starts, in Manage >
Replicating machines. Settings can't be changed after replication starts.
You can monitor replication status by selecting Replicating servers in Azure Migrate:
Server Migration.
Migrate VMs
After machines are replicated, they are ready for migration. To migrate your servers,
follow these steps:
1. In the Azure Migrate project > Servers > Azure Migrate: Server Migration, select
Replicating servers.
2. To ensure the migrated server is synchronized with the source server, stop the SQL
Server service on every replica in the availability group, starting with secondary
replicas (in SQL Server Configuration Manager > Services) while ensuring the
disks hosting SQL data are online.
3. In Replicating machines > select server name > Overview, ensure that the last
synchronized timestamp is after you stopped the SQL Server service on the servers
to be migrated before you move on to the next step. This should only take a few
minutes.
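As an alternative to SQL Server Configuration Manager, stopping the service in step 2 can be scripted; this sketch assumes the default instance:

PowerShell

# Stop the SQL Server service on a replica (default instance).
# For a named instance, use 'MSSQL$<InstanceName>'.
Stop-Service -Name 'MSSQLSERVER'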
5. In Migrate > Shut down virtual machines and perform a planned migration with
no data loss, select No > OK.
Note
For physical server migration, shutting down the source machine automatically is
not supported. The recommendation is to bring the application down as part of the
migration window (don't let the applications accept any connections) and then
initiate the migration. The server needs to be kept running so that remaining
changes can be synchronized before the migration is completed.
6. A migration job starts for the VM. Track the job in Azure notifications.
7. After the job finishes, you can view and manage the VM from the Virtual Machines
page.
Reconfigure cluster
After your VMs have migrated, reconfigure the cluster. Follow these steps:
2. Add the migrated machines to the backend pool of the load balancer. Navigate to
Load Balancer > Backend pools.
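The backend pool step can also be sketched with the Az module; the load balancer, NIC, and resource group names are placeholders:

PowerShell

# Add a migrated VM's network interface to the load balancer backend pool.
$lb  = Get-AzLoadBalancer -Name '<load-balancer-name>' -ResourceGroupName '<resource-group>'
$nic = Get-AzNetworkInterface -Name '<vm-nic-name>' -ResourceGroupName '<resource-group>'
$nic.IpConfigurations[0].LoadBalancerBackendAddressPools.Add($lb.BackendAddressPools[0])
Set-AzNetworkInterface -NetworkInterface $nic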
Next steps
Investigate the cloud migration journey in the Azure Cloud Adoption Framework.
Migrate failover cluster instance to SQL
Server on Azure VMs
Article • 03/27/2023
This article teaches you to migrate your Always On failover cluster instance (FCI) to SQL
Server on Azure VMs using the Azure Migrate: Server Migration tool. Using the
migration tool, you will be able to migrate each node in the failover cluster instance to
an Azure VM hosting SQL Server, as well as the cluster and FCI metadata.
This guide uses the agent-based migration approach of Azure Migrate, which treats any
server or virtual machine as a physical server. When migrating physical machines, Azure
Migrate: Server Migration uses the same replication architecture as the agent-based
disaster recovery in the Azure Site Recovery service, and some components share the
same code base. Some content might link to Site Recovery documentation.
Prerequisites
Before you begin this tutorial, you should:
Prepare Azure
Prepare Azure for migration with Server Migration.
Task | Details
Create an Azure Migrate project | Your Azure account needs Contributor or Owner permissions to create a new project.
Verify permissions for your Azure account | Your Azure account needs Contributor or Owner permissions on the Azure subscription, permissions to register Azure Active Directory (Azure AD) apps, and User Access Administrator permissions on the Azure subscription to create a Key Vault, to create a VM, and to write to an Azure managed disk.
Set up an Azure virtual network | Set up an Azure virtual network (VNet). When you replicate to Azure, Azure VMs are created and joined to the Azure VNet that you specify when you set up migration.
1. In the Azure portal, open the subscription, and select Access control (IAM).
2. In Check access, find the relevant account, and select it to view permissions.
3. You should have Contributor or Owner permissions.
If you just created a free Azure account, you're the owner of your
subscription.
If you're not the subscription owner, work with the owner to assign the role.
If you need to assign permissions, follow the steps in Prepare for an Azure user account.
Create a Windows Server 2016 machine to host the replication appliance. Review
the machine requirements.
The replication appliance uses MySQL. Review the options for installing MySQL on
the appliance.
Review the Azure URLs required for the replication appliance to access public and
government clouds.
Review port access requirements for the replication appliance.
Note
The replication appliance should be installed on a machine other than the source
machine you are replicating or migrating, and not on any machine that has had the
Azure Migrate discovery and assessment appliance installed before.
1. In the Azure Migrate project > Servers, in Azure Migrate: Server Migration, select
Discover.
2. In Discover machines > Are your machines virtualized?, select Physical or other
(AWS, GCP, Xen, etc.).
3. In Target region, select the Azure region to which you want to migrate the
machines.
5. Select Create resources. This creates an Azure Site Recovery vault in the
background.
If you've already set up migration with Azure Migrate Server Migration, the
target option can't be configured, since resources were set up previously.
You can't change the target region for this project after selecting this button.
All subsequent migrations are to this region.
8. Copy the appliance setup file and key file to the Windows Server 2016 machine
you created for the appliance.
9. After the installation completes, the Appliance configuration wizard launches
automatically. (You can also launch the wizard manually by using the cspsconfigtool
shortcut that is created on the desktop of the appliance machine.) Use the Manage
Accounts tab of the wizard to create a dummy account with the following details:
You will use this dummy account in the Enable Replication stage.
10. After setup completes, and the appliance restarts, in Discover machines, select the
new appliance in Select Configuration Server, and select Finalize registration.
Finalize registration performs a couple of final tasks to prepare the replication
appliance.
2. Navigate to %ProgramData%\ASR\home\svsystems\pushinstallsvc\repository .
3. Find the installer for the machine operating system and version. Review supported
operating systems.
5. Make sure that you have the current passphrase that was generated when you
deployed the appliance.
Don't regenerate the passphrase. This will break connectivity and you will
have to reregister the replication appliance.
In the /Platform parameter, specify VMware for both VMware machines and
physical machines.
6. Connect to the machine and extract the contents of the installer file to a local
folder (such as c:\temp). Run this in an admin command prompt:
MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted
cd C:\Temp\Extracted
It may take some time after installation for discovered machines to appear in Azure
Migrate: Server Migration. As VMs are discovered, the Discovered servers count rises.
Caution
Maintain disk ownership throughout the replication process until the final
cutover. If there is a change in disk ownership, there is a chance that the
volumes could be corrupted and replication would need to be retriggered.
Set the preferred owner for each disk to avoid transfer of ownership during
the replication process.
Avoid patching activities and system reboots during the replication process to
avoid transfer of disk ownership.
1. Identify disk ownership: Sign in to one of the cluster nodes and open Failover
Cluster Manager. Identify the owner node for the disks to determine the disks that
need to be migrated with each server.
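The steps above can be sketched with the FailoverClusters PowerShell module; the role name is a placeholder:

PowerShell

# List cluster disks with their current owner node.
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk' |
    Select-Object Name, OwnerNode, State

# Pin the preferred owner of the role that holds the disks,
# to avoid ownership transfer during replication.
Set-ClusterOwnerNode -Group 'SQL Server (MSSQLSERVER)' -Owners '<node-name>'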
PowerShell
./Get-ClusterInfo.ps1
Column header | Description
NewIP | Specify the IP address in the Azure virtual network (or subnet) for each resource in the CSV file.
ServicePort | Specify the service port to be used by each resource in the CSV file. For the SQL cluster resource, use the same value for the service port as the probe port in the CSV. For other cluster roles, the default value used is 1433, but you can continue to use the port numbers that are configured in your current setup.
Parameter | Type | Description
VNetResourceGroupName | Mandatory | Specify the name of the resource group for the Azure virtual network that the load balancer will be associated with.
PowerShell
./Create-ClusterLoadBalancer.ps1 -ConfigFilePath ./clusterinfo.csv `
    -ResourceGroupName $resourcegroupname -VNetName $vnetname `
    -SubnetName $subnetname -VNetResourceGroupName $vnetresourcegroupname `
    -Location "eastus" -LoadBalancerName $loadbalancername
Replicate machines
Now, select machines for migration. You can replicate up to 10 machines together. If
you need to replicate more, then replicate them simultaneously in batches of 10.
1. In the Azure Migrate project > Servers, Azure Migrate: Server Migration, select
Replicate.
2. In Replicate, > Source settings > Are your machines virtualized?, select Physical
or other (AWS, GCP, Xen, etc.).
3. In On-premises appliance, select the name of the Azure Migrate appliance that
you set up.
5. In Guest credentials, select the dummy account created previously during the
replication installer setup. Then select Next: Virtual machines.
6. In Virtual Machines, in Import migration settings from an assessment?, leave the
default setting No, I'll specify the migration settings manually.
7. Check each VM you want to migrate. Then select Next: Target settings.
8. In Target settings, select the subscription, and target region to which you'll
migrate, and specify the resource group in which the Azure VMs will reside after
migration.
9. In Virtual Network, select the Azure VNet/subnet to which the Azure VMs will be
joined after migration.
Note
To replicate VMs with CMK, you'll need to create a disk encryption set under
the target Resource Group. A disk encryption set object maps Managed Disks
to a Key Vault that contains the CMK to use for SSE.
Select No if you don't want to apply Azure Hybrid Benefit. Then select Next.
Select Yes if you have Windows Server machines that are covered with active
Software Assurance or Windows Server subscriptions, and you want to apply
the benefit to the machines you're migrating. Then select Next.
13. In Compute, review the VM name, size, OS disk type, and availability configuration
(if selected in the previous step). VMs must conform with Azure requirements.
Use the list that you had made earlier to select the disks to be replicated with
each server. Exclude other disks from replication.
15. In Review and start replication, review the settings, and select Replicate to start
the initial replication for the servers.
Note
You can update replication settings any time before replication starts, in Manage >
Replicating machines. Settings can't be changed after replication starts.
You can monitor replication status by selecting Replicating servers in Azure Migrate:
Server Migration.
Migrate VMs
After machines are replicated, they are ready for migration. To migrate your servers,
follow these steps:
1. In the Azure Migrate project > Servers > Azure Migrate: Server Migration, select
Replicating servers.
2. To ensure that the migrated server is synchronized with the source server, stop the
SQL Server resource (in Failover Cluster Manager > Roles > Other resources)
while ensuring that the cluster disks are online.
3. In Replicating machines > select server name > Overview, ensure that the last
synchronized timestamp is after you stopped the SQL Server resource on the
servers to be migrated before you move on to the next step. This should only take
a few minutes.
5. In Migrate > Shut down virtual machines and perform a planned migration with
no data loss, select No > OK.
Note
For physical server migration, shutting down the source machine automatically is
not supported. The recommendation is to bring the application down as part of the
migration window (don't let the applications accept any connections) and then
initiate the migration. The server needs to be kept running so that remaining
changes can be synchronized before the migration is completed.
6. A migration job starts for the VM. Track the job in Azure notifications.
7. After the job finishes, you can view and manage the VM from the Virtual Machines
page.
Reconfigure cluster
After your VMs have migrated, reconfigure the cluster. Follow these steps:
2. Add the migrated machines to the backend pool of the load balancer. Navigate to
Load Balancer > Backend pools.
4. Reconfigure the migrated disks of the servers as shared disks by running the
Create-SharedDisks.ps1 script. The script is interactive and will prompt for a list of
machines and then show available disks to be extracted (only data disks). You will
be prompted once to select which machines contain the drives to be turned into
shared disks. Once selected, you will be prompted again, once per machine, to pick
the specific disks.
Parameter | Type | Description
DiskNamePrefix | Optional | Specify the prefix that you'd want to add to the names of your shared disks.
PowerShell
5. Attach the shared disks to the migrated servers by running the Attach-
SharedDisks.ps1 script.
Parameter | Type | Description
StartingLunNumber | Optional | Specify the starting LUN number that is available for the shared disks to be attached to. By default, the script tries to attach shared disks starting at LUN 0.
PowerShell
Next steps
Investigate the cloud migration journey in the Azure Cloud Adoption Framework.
Prerequisites: Migrate to SQL Server VM
using distributed AG
Article • 08/30/2022
Use a distributed availability group (AG) to migrate either a standalone instance of SQL
Server or an Always On availability group to SQL Server on Azure Virtual Machines
(VMs).
This article describes the prerequisites to prepare your source and target environments
to migrate your SQL Server instance or availability group to SQL Server VMs using a
distributed AG.
For a standalone instance migration, the minimum supported version is SQL Server
2017. For an availability group migration, SQL Server 2016 or later is supported.
Your SQL Server edition must be Enterprise.
You must enable the Always On feature.
The databases you intend to migrate must use the full recovery model and have a full backup.
If you already have an availability group, it must be in a healthy state. If you create
an availability group as part of this process, it must be in a healthy state before you
start the migration.
Ports used by the SQL Server instance (1433 by default) and the database
mirroring endpoint (5022 by default) must be open in the firewall. To migrate
databases in an availability group, make sure the port used by the listener is also
open in the firewall.
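On Windows, the firewall rules described above can be sketched as follows (rule names are placeholders; adjust the ports if you've changed the defaults):

PowerShell

# Open the default SQL Server port and the database mirroring endpoint port.
New-NetFirewallRule -DisplayName 'SQL Server' -Direction Inbound `
    -Protocol TCP -LocalPort 1433 -Action Allow
New-NetFirewallRule -DisplayName 'SQL mirroring endpoint' -Direction Inbound `
    -Protocol TCP -LocalPort 5022 -Action Allow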
Connectivity
The source and target SQL Server instance must have an established network
connection.
If your source SQL Server instance is located on an Azure virtual network that is different
than the target SQL Server VM, then configure virtual network peering.
Authentication
To simplify authentication between your source and target SQL Server instances, join
both servers to the same domain (preferably the domain on the source side) and use
domain-based authentication. Because this is the recommended approach, the steps in
this tutorial series assume both the source and target SQL Server instances are part
of the same domain.
If the source and target servers are part of different domains, configure federation
between the two domains, or configure a domain-independent availability group.
Next steps
Once you have configured both source and target environment to meet the
prerequisites, you're ready to migrate either your standalone instance of SQL Server or
an Always On availability group to your target SQL Server VM(s).
Use distributed AG to migrate databases
from a standalone instance
Article • 08/30/2022
Use a distributed availability group (AG) to migrate a database (or multiple databases)
from a standalone instance of SQL Server to SQL Server on Azure Virtual Machines
(VMs).
Once you've validated your source SQL Server instance meets the prerequisites, follow
the steps in this article to create an availability group on your standalone SQL Server
instance and migrate your database (or group of databases) to your SQL Server VM in
Azure.
This article is intended for databases on a standalone instance of SQL Server. This
solution does not require a Windows Server Failover Cluster (WSFC) or an availability
group listener. It's also possible to migrate databases in an availability group.
Initial setup
The first step is to create your SQL Server VM in Azure. You can do so by using the Azure
portal, Azure PowerShell, or an ARM template.
For simplicity, join your target SQL Server VM to the same domain as your source SQL
Server. Otherwise, join your target SQL Server VM to a domain that's federated with the
domain of your source SQL Server.
To use automatic seeding to create your distributed availability group (DAG), the
instance name for the global primary (source) of the DAG must match the instance
name of the forwarder (target) of the DAG. If there is an instance name mismatch
between the global primary and forwarder, then you must use manual seeding to create
the DAG, and manually add any additional database files in the future.
Create endpoints
Use Transact-SQL (T-SQL) to create endpoints on both your source (OnPremNode) and
target (SQLVM) SQL Server instances.
To create your endpoints, run this T-SQL script on both source and target servers:
SQL
CREATE ENDPOINT [Hadr_endpoint]
STATE = STARTED
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
ROLE = ALL,
AUTHENTICATION = WINDOWS NEGOTIATE,
ENCRYPTION = REQUIRED ALGORITHM AES)
GO
Domain accounts automatically have access to endpoints, but service accounts may not
automatically be part of the sysadmin group and may not have connect permission. To
manually grant the SQL Server service account connect permission to the endpoint, run
the following T-SQL script on both servers:
SQL
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [<your account>]
Create source AG
Since a distributed availability group is a special availability group that spans across two
individual availability groups, you first need to create an availability group on the source
SQL Server instance. If you already have an availability group that you would like to
maintain in Azure, then migrate your availability group instead.
SQL
CREATE AVAILABILITY GROUP [OnPremAG]
WITH (DB_FAILOVER = OFF,
DTC_SUPPORT = NONE,
CLUSTER_TYPE = NONE)
FOR DATABASE [<YourDatabase>]
REPLICA ON N'OnPremNode' WITH (
ENDPOINT_URL = N'TCP://OnPremNode.contoso.com:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC)
GO
Create target AG
You also need to create an availability group on the target SQL Server VM.
SQL
CREATE AVAILABILITY GROUP [AzureAG]
WITH (DB_FAILOVER = OFF,
DTC_SUPPORT = NONE,
CLUSTER_TYPE = NONE,
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 0)
FOR REPLICA ON N'SQLVM' WITH (
ENDPOINT_URL = N'TCP://SQLVM.contoso.com:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC)
GO
Create distributed AG
After you have your source (OnPremAG) and target (AzureAG) availability groups
configured, create your distributed availability group to span both individual availability
groups.
SQL
CREATE AVAILABILITY GROUP [DAG]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
'OnPremAG' WITH
(
LISTENER_URL = 'tcp://OnPremNode.contoso.com:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
),
'AzureAG' WITH
(
LISTENER_URL = 'tcp://SQLVM.contoso.com:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
);
GO
Note
The seeding mode is set to AUTOMATIC because the version of SQL Server on the
target and source is the same. If your SQL Server target is a higher version, or if
your global primary and forwarder have different instance names, then create the
distributed AG, and join the secondary AG to the distributed AG with
SEEDING_MODE set to MANUAL . Then manually restore your databases from the
source to the target SQL Server instance. Review upgrading versions during
migration to learn more.
After your distributed AG is created, join the target AG (AzureAG) on the target instance
(SQLVM) to the distributed AG (DAG).
To join the target AG to the distributed AG, run this script on the target:
SQL
ALTER AVAILABILITY GROUP [DAG]
JOIN
AVAILABILITY GROUP ON
'OnPremAG' WITH
(LISTENER_URL = 'tcp://OnPremNode.contoso.com:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
),
'AzureAG' WITH
(LISTENER_URL = 'tcp://SQLVM.contoso.com:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
);
GO
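To watch seeding and synchronization progress after the join, a sketch using the standard availability group DMVs:

SQL

-- Check synchronization state and health for each replica and database.
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON drs.replica_id = ar.replica_id
JOIN sys.availability_groups AS ag ON drs.group_id = ag.group_id;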
If you need to cancel, pause, or delay synchronization between the source and target
availability groups (for example, because of performance issues), run this script on
the source global primary instance (OnPremNode):
SQL
ALTER AVAILABILITY GROUP [DAG]
MODIFY
AVAILABILITY GROUP ON
'AzureAG' WITH
( SEEDING_MODE = MANUAL );
Once you've validated that your source SQL Server instances meet the prerequisites,
follow the steps in this article to create a distributed availability group between
your existing availability group and your target availability group on SQL Server on
Azure VMs.
This article is intended for databases participating in an availability group, and requires a
Windows Server Failover Cluster (WSFC) and an availability group listener. It's also
possible to migrate databases from a standalone SQL Server instance.
Initial setup
The first step is to create your SQL Server VMs in Azure. You can do so by using the
Azure portal, Azure PowerShell, or an ARM template.
Be sure to configure your SQL Server VMs according to the prerequisites. Choose
between a single subnet deployment, which relies on an Azure Load Balancer or
distributed network name to route traffic to your availability group listener, or a multi-
subnet deployment which does not have such a requirement. The multi-subnet
deployment is recommended. To learn more, see connectivity.
For simplicity, join your target SQL Server VMs to the same domain as your source SQL
Server instances. Otherwise, join your target SQL Server VM to a domain that's federated
with the domain of your source SQL Server instances.
To use automatic seeding to create your distributed availability group (DAG), the
instance name for the global primary (source) of the DAG must match the instance
name of the forwarder (target) of the DAG. If there is an instance name mismatch
between the global primary and forwarder, then you must use manual seeding to create
the DAG, and manually add any additional database files in the future.
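To confirm the instance names match before you pick a seeding mode, you can compare the names on each side. This is a minimal sketch to run on both the global primary and the forwarder:

```sql
-- Run on both the global primary (source) and the forwarder (target).
-- SERVERPROPERTY('InstanceName') returns NULL for a default instance,
-- so both sides returning NULL also counts as a match.
SELECT SERVERPROPERTY('MachineName')  AS machine_name,
       SERVERPROPERTY('InstanceName') AS instance_name;
```

If the instance_name values differ, plan for manual seeding as described above.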
Create endpoints
Use Transact-SQL (T-SQL) to create endpoints on both the source instances
(OnPremNode1, OnPremNode2) and the target SQL Server instances (SQLVM1, SQLVM2).
If you already have an availability group configured on the source instances, run this
script only on the two target instances.
To create your endpoints, run this T-SQL script on both source and target servers:
SQL
CREATE ENDPOINT [Hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (
        ROLE = ALL,
        AUTHENTICATION = WINDOWS NEGOTIATE,
        ENCRYPTION = REQUIRED ALGORITHM AES
    );
GO
Domain accounts automatically have access to endpoints, but service accounts may not
automatically be part of the sysadmin group and may not have connect permission. To
manually grant the SQL Server service account connect permission to the endpoint, run
the following T-SQL script on both servers:
SQL
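A minimal sketch of that grant, assuming the endpoint is named Hadr_endpoint; the account name is a placeholder to replace with your SQL Server service account:

```sql
-- Replace <domain\sql_service_account> with the SQL Server service account.
USE [master];
GO
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [<domain\sql_service_account>];
GO
```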
Create source AG
Since a distributed availability group is a special availability group that spans across two
individual availability groups, you first need to create an availability group on the two
source SQL Server instances.
If you already have an availability group on your source instances, skip this section.
Use Transact-SQL (T-SQL) to create an availability group (OnPremAG) between your two
source instances (OnPremNode1, OnPremNode2) for the example Adventureworks
database.
To create the availability group on the source instances, run this script on the source
primary replica (OnPremNode1):
SQL
CREATE AVAILABILITY GROUP [OnPremAG]
WITH (DB_FAILOVER = OFF,
      DTC_SUPPORT = NONE)
FOR DATABASE [Adventureworks]
REPLICA ON
    N'OnPremNode1' WITH (ENDPOINT_URL = N'TCP://OnPremNode1.contoso.com:5022',
        FAILOVER_MODE = AUTOMATIC,
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        SEEDING_MODE = AUTOMATIC,
        SECONDARY_ROLE(ALLOW_CONNECTIONS = NO)),
    N'OnPremNode2' WITH (ENDPOINT_URL = N'TCP://OnPremNode2.contoso.com:5022',
        FAILOVER_MODE = AUTOMATIC,
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        SEEDING_MODE = AUTOMATIC,
        SECONDARY_ROLE(ALLOW_CONNECTIONS = NO));
GO
To join the availability group, run this script on the source secondary replica:
SQL
ALTER AVAILABILITY GROUP [OnPremAG] JOIN;
GO
ALTER AVAILABILITY GROUP [OnPremAG] GRANT CREATE ANY DATABASE;
GO
Finally, create the listener for your global primary availability group (OnPremAG).
To create the listener, run this script on the source primary replica:
SQL
USE [master]
GO
ALTER AVAILABILITY GROUP [OnPremAG]
ADD LISTENER N'OnPremAG_LST' (WITH IP ((N'<listener_IP_address>', N'<subnet_mask>')), PORT = 60173);
GO
Create target AG
You also need to create an availability group on the target SQL Server VMs.
If you already have an availability group configured between your SQL Server instances
in Azure, skip this section.
Use Transact-SQL (T-SQL) to create an availability group (AzureAG) on the target SQL
Server instances (SQLVM1 and SQLVM2).
To create the availability group on the target, run this script on the target primary
replica:
SQL
CREATE AVAILABILITY GROUP [AzureAG]
FOR
REPLICA ON
    N'SQLVM1' WITH (ENDPOINT_URL = N'TCP://SQLVM1.contoso.com:5022',
        FAILOVER_MODE = MANUAL,
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        BACKUP_PRIORITY = 50,
        SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),
        SEEDING_MODE = AUTOMATIC),
    N'SQLVM2' WITH (ENDPOINT_URL = N'TCP://SQLVM2.contoso.com:5022',
        FAILOVER_MODE = MANUAL,
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        BACKUP_PRIORITY = 50,
        SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),
        SEEDING_MODE = AUTOMATIC);
GO
Next, join the target secondary replica (SQLVM2) to the availability group (AzureAG).
SQL
ALTER AVAILABILITY GROUP [AzureAG] JOIN;
GO
ALTER AVAILABILITY GROUP [AzureAG] GRANT CREATE ANY DATABASE;
GO
Finally, create a listener (AzureAG_LST) for your target availability group (AzureAG). If
you deployed your SQL Server VMs to multiple subnets, create your listener using
Transact-SQL. If you deployed your SQL Server VMs to a single subnet, configure either
an Azure Load Balancer, or a distributed network name for your listener.
To create your listener, run this script on the primary replica of the availability group in
Azure.
SQL
USE [master]
GO
ALTER AVAILABILITY GROUP [AzureAG]
ADD LISTENER N'AzureAG_LST' (WITH IP ((N'<listener_IP_address>', N'<subnet_mask>')), PORT = <port>);
GO
Create distributed AG
After you have your source (OnPremAG) and target (AzureAG) availability groups
configured, create your distributed availability group to span both individual availability
groups.
Use Transact-SQL on the source SQL Server global primary (OnPremNode1) and AG
(OnPremAG) to create the distributed availability group (DAG).
To create the distributed AG on the source, run this script on the source global primary:
SQL
CREATE AVAILABILITY GROUP [DAG]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
    'OnPremAG' WITH
    (
    LISTENER_URL = 'tcp://OnPremAG_LST.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
    ),
    'AzureAG' WITH
    (
    LISTENER_URL = 'tcp://AzureAG_LST.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
    );
GO
Note
The seeding mode is set to AUTOMATIC because the version of SQL Server on the target
and source is the same. If your SQL Server target is a higher version, or if your
global primary and forwarder have different instance names, then create the
distributed AG, and join the secondary AG to the distributed AG with
SEEDING_MODE set to MANUAL. Then manually restore your databases from the
source to the target SQL Server instance. Review upgrading versions during
migration to learn more.
After your distributed AG is created, join the target AG (AzureAG) on the target
forwarder instance (SQLVM1) to the distributed AG (DAG).
To join the target AG to the distributed AG, run this script on the target forwarder:
SQL
ALTER AVAILABILITY GROUP [DAG]
JOIN
AVAILABILITY GROUP ON
    'OnPremAG' WITH
    (
    LISTENER_URL = 'tcp://OnPremAG_LST.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
    ),
    'AzureAG' WITH
    (
    LISTENER_URL = 'tcp://AzureAG_LST.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC
    );
GO
If you need to cancel, pause, or delay synchronization between the source and target
availability groups (for example, because of performance issues), run this script on the
source global primary instance (OnPremNode1):
SQL
ALTER AVAILABILITY GROUP [DAG]
MODIFY
AVAILABILITY GROUP ON
    'AzureAG' WITH (SEEDING_MODE = MANUAL);
GO
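To resume synchronization later, the same statement can set the seeding mode back to automatic. A sketch, assuming the distributed AG and target AG names used in this article:

```sql
-- Run on the source global primary to restart automatic seeding
-- toward the target availability group.
ALTER AVAILABILITY GROUP [DAG]
MODIFY
AVAILABILITY GROUP ON
    'AzureAG' WITH (SEEDING_MODE = AUTOMATIC);
GO
```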
Next steps
After your distributed availability group is created, you are ready to complete the
migration.
Complete migration using a distributed
AG
Article • 09/29/2022
Use a distributed availability group (AG) to migrate your databases from SQL Server to
SQL Server on Azure Virtual Machines (VMs).
This article assumes you've already configured your distributed AG for either your
standalone databases or your availability group databases and now you're ready to
finalize the migration to SQL Server on Azure VMs.
Monitor migration
Use Transact-SQL (T-SQL) to monitor the progress of your migration.
Run the following script on the global primary and the forwarder, and validate that the
synchronization_state_desc for the primary availability group (OnPremAG)
and the secondary availability group (AzureAG) is SYNCHRONIZED. Confirm that the
synchronization_state_desc for the distributed AG (DAG) is SYNCHRONIZING and that the
last_hardened_lsn is the same per database on both the global primary and the
forwarder.
If the values don't match, rerun the query on both sides every few seconds until they do.
SQL
SELECT ag.name
, drs.database_id
, db_name(drs.database_id) as database_name
, drs.group_id
, drs.replica_id
, drs.synchronization_state_desc
, drs.last_hardened_lsn
FROM sys.dm_hadr_database_replica_states drs
INNER JOIN sys.availability_groups ag ON drs.group_id = ag.group_id;
Complete migration
Once you've validated the states of the availability group and the distributed AG, you're
ready to complete the migration. This consists of failing over the distributed AG to the
forwarder (the target SQL Server in Azure), and then cutting over the application to the
new primary on the Azure side.
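The failover itself typically sets the distributed AG's role to secondary on the global primary, then forces failover on the forwarder once you've confirmed the last_hardened_lsn values match on both sides. A sketch, assuming the names used in this article:

```sql
-- 1. On the global primary (OnPremNode1): make the distributed AG
--    secondary, which stops accepting new writes on the source.
ALTER AVAILABILITY GROUP [DAG] SET (ROLE = SECONDARY);
GO

-- 2. On the forwarder (SQLVM1): fail over once the hardened LSNs match.
--    FORCE_FAILOVER_ALLOW_DATA_LOSS is the supported failover type for a
--    distributed AG, which is why the LSN check beforehand matters.
ALTER AVAILABILITY GROUP [DAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;
GO
```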
After the failover, update the connection string of your application to connect to the
new primary replica in Azure. At this point, you can choose to maintain the distributed
availability group, or use DROP AVAILABILITY GROUP [DAG] on both the source and target
SQL Server instances to drop it.
If your domain controller is on the source side, validate that your target SQL Server VMs
in Azure have joined the domain before abandoning the source SQL Server instances.
Don't delete the domain controller on the source side until you create a domain on the
source side in Azure and add your SQL Server VMs to this new domain.
Next steps
For a tutorial showing you how to migrate a database to SQL Server on Azure Virtual
Machines using the T-SQL RESTORE command, see Migration guide: SQL Server to SQL
Server on Azure Virtual Machines.
For information about SQL Server on Azure Virtual Machines, see the Overview.
For information about connecting apps to SQL Server on Azure Virtual Machines,
see Connect applications.
Azure SQL glossary of terms
Article • 02/13/2023
Applies to: Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM
Azure SQL Database (Azure service): Azure SQL Database is a fully managed platform as a service (PaaS) database that handles most database management functions such as upgrading, patching, backups, and monitoring without user involvement.
Database engine: The database engine used in Azure SQL Database is the most recent stable version of the same database engine shipped as the Microsoft SQL Server product. Some database engine features are exclusive to Azure SQL Database or are available before they are shipped with SQL Server. The database engine is configured and optimized for use in the cloud. In addition to core database functionality, Azure SQL Database provides cloud-native capabilities such as Hyperscale and serverless compute.
Logical server (server entity): A logical server is a construct that acts as a central administrative point for a collection of databases in Azure SQL Database and Azure Synapse Analytics. All databases managed by a server are created in the same region as the server. A server is a purely logical concept: a logical server is not a machine running an instance of the database engine. There is no instance-level access or instance features for a server.
Elastic pool: Elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single logical server. The databases share a set allocation of resources at a set price.
Single database: If you deploy single databases, each database is isolated, using a dedicated database engine. Each has its own service tier within your selected purchasing model and a compute size defining the resources allocated to the database engine.
Service tier: The service tier defines the storage architecture, storage and I/O limits, and business continuity options. Options for service tiers vary by purchasing model.
DTU-based service tiers: Basic, standard, and premium service tiers are available in the DTU-based purchasing model.
Available hardware configurations (hardware configuration): The vCore-based purchasing model allows you to select the appropriate hardware configuration for your workload. Hardware configuration options include standard series (Gen5), M-series, Fsv2-series, and DC-series.
vCore-based sizing options: Configure the compute size for your database or elastic pool by selecting the appropriate service tier, compute tier, and hardware for your workload. When using an elastic pool, configure the reserved vCores for the pool, and optionally configure per-database settings. For sizing options and resource limits in the vCore-based purchasing model, see vCore single databases and vCore elastic pools.
DTU-based sizing options: Configure the compute size for your database or elastic pool by selecting the appropriate service tier and selecting the maximum data size and number of DTUs. When using an elastic pool, configure the reserved eDTUs for the pool, and optionally configure per-database settings. For sizing options and resource limits in the DTU-based purchasing model, see DTU single databases and DTU elastic pools.
Azure SQL Managed Instance (Azure service): Azure SQL Managed Instance is a fully managed platform as a service (PaaS) deployment option of Azure SQL. It gives you an instance of SQL Server, including the SQL Server Agent, but removes much of the overhead of managing a virtual machine. Most of the features available in SQL Server are available in SQL Managed Instance. Compare the features in Azure SQL Database and Azure SQL Managed Instance.
Database engine: The database engine used in Azure SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine. Some database engine features are exclusive to managed instances or are available in managed instances before they are shipped with SQL Server. Managed instances provide cloud-native capabilities and integrations such as a native virtual network (VNet) implementation, automatic patching and version updates, automated backups, and high availability.
Managed instance (server entity): Each managed instance is itself an instance of SQL Server. Databases created on a managed instance are colocated with one another, and you may run cross-database queries. You can connect to the managed instance and use instance-level features such as linked servers and the SQL Server Agent.
Instance pool (preview): Instance pools enable you to deploy multiple managed instances to the same virtual machine. Instance pools enable you to migrate smaller and less compute-intensive workloads to the cloud without consolidating them in a single larger managed instance.
vCore-based service tiers (service tier): SQL Managed Instance offers two service tiers. Both service tiers guarantee 99.99% availability and enable you to independently select storage size and compute capacity. Select either the General Purpose or Business Critical service tier for a managed instance based upon your performance and latency requirements.
vCore-based sizing options (compute size): Compute size (service objective) is the maximum amount of CPU, memory, and storage resources available for a single managed instance or instance pool. Configure the compute size for your managed instance by selecting the appropriate service tier and hardware for your workload. Learn about resource limits for managed instances.
SQL Server on Azure Virtual Machines (VMs) (Azure service): SQL Server on Azure VMs enables you to use full versions of SQL Server in the cloud without having to manage any on-premises hardware. SQL Server VMs simplify licensing costs when you pay as you go. You have both SQL Server and OS access with some automated manageability features for SQL Server VMs, such as the SQL IaaS Agent extension.
Virtual machine or VM (server entity): Azure VMs run in many geographic regions around the world. They also offer various machine sizes. The virtual machine image gallery allows you to create a SQL Server VM with the right version, edition, and operating system.
Windows VMs or Linux VMs (image): You can choose to deploy SQL Server VMs with Windows-based images or Linux-based images. Image selection specifies both the OS version and SQL Server edition for your SQL Server VM.
Pricing: Pricing for SQL Server on Azure VMs is based on SQL Server licensing, operating system (OS), and virtual machine cost. You can reduce costs by optimizing your VM size and shutting down your VM when possible.
SQL Server licensing cost: Choose the appropriate free or paid SQL Server edition for your usage and requirements. For paid editions, you may pay per usage (also known as pay as you go) or use Azure Hybrid Benefit.
OS and virtual machine cost: OS and virtual machine cost is based upon factors including your choice of image, VM size, and storage configuration.
Security considerations: You can enable Microsoft Defender for SQL, integrate Azure Key Vault, control access, and secure connections to your SQL Server VM. Learn security guidelines to establish secure access to SQL Server VMs.
SQL IaaS Agent extension: The SQL IaaS Agent extension (SqlIaasExtension) runs on SQL Server VMs to automate management and administration tasks. There's no extra cost associated with the extension.
Applies to: SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, Analytics Platform System (PDW), SQL Endpoint in Microsoft Fabric, Warehouse in Microsoft Fabric
This article gives the basics about how to find and use the Microsoft Transact-SQL (T-
SQL) reference articles. T-SQL is central to using Microsoft SQL products and services. All
tools and applications that communicate with a SQL Server database do so by sending
T-SQL commands.
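For instance, even the simplest query a tool like SQL Server Management Studio or sqlcmd issues on your behalf is T-SQL. A trivial, illustrative example:

```sql
-- Returns the product version string and edition of the connected instance.
SELECT @@VERSION AS version_info,
       SERVERPROPERTY('Edition') AS edition;
```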
For example, an article that applies to all versions has the following label:
Applies to: SQL Server, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, Analytics Platform System (PDW)
As another example, a label can indicate that an article applies only to Azure
Synapse Analytics and Parallel Data Warehouse.
In some cases, an article covers a product or service, but not all of the arguments are
supported. In that case, additional Applies to sections are inserted into the appropriate
argument descriptions in the body of the article.
Next steps
Tutorial: Writing Transact-SQL Statements
Transact-SQL Syntax Conventions (Transact-SQL)
SQL
Reference
Commands
az sql: Manage Azure SQL Databases and Data Warehouses.
Az.Sql
Reference
This topic displays help topics for the Azure SQL Database Cmdlets.
Add-AzSqlDatabaseToFailoverGroup: Adds one or more databases to an Azure SQL Database Failover Group.
Classes
AdministratorType: Defines values for AdministratorType.
Job: A job.
ManagedInstancePrivateEndpointProperty
ManagedInstancePrivateLink: A private link resource.
ManagedInstancePrivateLinkServiceConnectionStateProperty
NetworkIsolationSettings: Contains the ARM resources for which to create private endpoint connection.
PrivateEndpointProperty
PrivateLinkServiceConnectionStateProperty
QueryStatistics
TopQueries
UpsertManagedServerOperationParameters
UpsertManagedServerOperationStep
Enums
AdvancedThreatProtectionState: Defines values for AdvancedThreatProtectionState.
This package contains the classes for SqlManagementClient. The Azure SQL Database
management API provides a RESTful set of web services that interact with Azure SQL
Database services to manage your databases. The API enables you to create, retrieve,
update, and delete databases.
Classes
AutomaticTuningOptions: Automatic tuning properties for individual advisors.
Interfaces
CheckNameAvailabilityResult: The result of checking for the SQL server name availability.
SqlChildrenOperations&lt;T&gt;: Base class for Azure SQL Server child resource operations.
SqlChildrenOperations.SqlChildrenActionsDefinition&lt;T&gt;: Base interface for Azure SQL Server child resource actions.
SqlDatabase.DefinitionStages.Blank&lt;ParentT&gt;: The first stage of the SQL Server Firewall rule definition.
SqlDatabase.DefinitionStages.WithAllDifferentOptions&lt;ParentT&gt;: The SQL database interface with all starting options for definition.
SqlDatabase.DefinitionStages.WithAttachAfterElasticPoolOptions&lt;ParentT&gt;: The final stage of the SQL Database definition after the SQL Elastic Pool definition.
SqlDatabase.DefinitionStages.WithAttachAllOptions&lt;ParentT&gt;: The final stage of the SQL Database definition with all the other options.
SqlDatabase.DefinitionStages.WithCollation&lt;ParentT&gt;: The SQL Database definition to set the collation for database.
SqlDatabase.DefinitionStages.WithCollationAfterElasticPoolOptions&lt;ParentT&gt;: The SQL Database definition to set the collation for database.
SqlDatabase.DefinitionStages.WithCreateMode&lt;ParentT&gt;: The SQL Database definition to set the create mode for database.
SqlDatabase.DefinitionStages.WithEdition&lt;ParentT&gt;: The SQL Database definition to set the edition for database.
SqlDatabase.DefinitionStages.WithEditionDefaults&lt;ParentT&gt;: The SQL Database definition to set the edition default for database.
SqlDatabase.DefinitionStages.WithEditionDefaults.WithCollation&lt;ParentT&gt;: The SQL Database definition to set the collation for database.
SqlDatabase.DefinitionStages.WithElasticPoolName&lt;ParentT&gt;: The SQL Database definition to set the elastic pool for database.
SqlDatabase.DefinitionStages.WithMaxSizeBytes&lt;ParentT&gt;: The SQL Database definition to set the Max Size in Bytes for database.
SqlDatabase.DefinitionStages.WithMaxSizeBytesAfterElasticPoolOptions&lt;ParentT&gt;: The SQL Database definition to set the Max Size in Bytes for database.
SqlDatabase.DefinitionStages.WithRestorePointDatabase&lt;ParentT&gt;: The SQL Database definition to set a restore point as the source database.
SqlDatabase.DefinitionStages.WithRestorePointDatabaseAfterElasticPool&lt;ParentT&gt;: The SQL Database definition to set a restore point as the source database within an elastic pool.
SqlDatabase.DefinitionStages.WithServiceObjective&lt;ParentT&gt;: The SQL Database definition to set the service level objective.
SqlDatabase.DefinitionStages.WithSourceDatabaseId&lt;ParentT&gt;: The SQL Database definition to set the source database id for database.
SqlDatabase.UpdateStages.WithElasticPoolName: The SQL Database definition to set the elastic pool for database.
SqlDatabase.UpdateStages.WithMaxSizeBytes: The SQL Database definition to set the Max Size in Bytes for database.
SqlDatabase.UpdateStages.WithServiceObjective: The SQL Database definition to set the service level objective.
SqlDatabaseAutomaticTuning.UpdateStages.WithAutomaticTuningMode: The update stage setting the database automatic tuning desired state.
SqlDatabaseAutomaticTuning.UpdateStages.WithAutomaticTuningOptions: The update stage setting the database automatic tuning options.
SqlDatabaseExportRequest.DefinitionStages.WithExecute: The stage of the definition which contains all the minimum required inputs for execution, but also allows for any other optional settings to be specified.
SqlDatabaseImportRequest.DefinitionStages.WithExecute: The stage of the definition which contains all the minimum required inputs for execution, but also allows for any other optional settings to be specified.
SqlDatabaseOperations.DefinitionStages.WithAllDifferentOptions: The SQL database interface with all starting options for definition.
SqlDatabaseOperations.DefinitionStages.WithAuthentication: Sets the authentication type and SQL or Active Directory administrator login and password.
SqlDatabaseOperations.DefinitionStages.WithCollation: The SQL Database definition to set the collation for database.
SqlDatabaseOperations.DefinitionStages.WithCollationAfterElasticPoolOptions: The SQL Database definition to set the collation for database.
SqlDatabaseOperations.DefinitionStages.WithCreateAfterElasticPoolOptions: The final stage of the SQL Database definition after the SQL Elastic Pool definition.
SqlDatabaseOperations.DefinitionStages.WithCreateMode: The SQL Database definition to set the create mode for database.
SqlDatabaseOperations.DefinitionStages.WithEdition: The SQL Database definition to set the edition for database.
SqlDatabaseOperations.DefinitionStages.WithEditionDefaults: The SQL Database definition to set the edition for database with defaults.
SqlDatabaseOperations.DefinitionStages.WithEditionDefaults.WithCollation: The SQL Database definition to set the collation for database.
SqlDatabaseOperations.DefinitionStages.WithElasticPoolName: The SQL Database definition to set the elastic pool for database.
SqlDatabaseOperations.DefinitionStages.WithMaxSizeBytes: The SQL Database definition to set the Max Size in Bytes for database.
SqlDatabaseOperations.DefinitionStages.WithMaxSizeBytesAfterElasticPoolOptions: The SQL Database definition to set the Max Size in Bytes for database.
SqlDatabaseOperations.DefinitionStages.WithRestorePointDatabase: The SQL Database definition to set a restore point as the source database.
SqlDatabaseOperations.DefinitionStages.WithRestorePointDatabaseAfterElasticPool: The SQL Database definition to set a restore point as the source database within an elastic pool.
SqlDatabaseOperations.DefinitionStages.WithServiceObjective: The SQL Database definition to set the service level objective.
SqlDatabaseOperations.DefinitionStages.WithSourceDatabaseId: The SQL Database definition to set the source database id for database.
SqlDatabaseOperations.DefinitionStages.WithSqlServer: The stage of the SQL Database rule definition allowing to specify the parent resource group, SQL server and location.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.Blank: The first stage of the SQL database threat detection policy definition.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.WithAlertsFilter: The SQL database threat detection policy definition to set the security alert policy alerts to be disabled.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.WithCreate: The final stage of the SQL database threat detection policy definition.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.WithEmailAddresses: The SQL database threat detection policy definition to set the security alert policy email addresses.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.WithEmailToAccountAdmins: The SQL database threat detection policy definition to set that the alert is sent to the account administrators.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.WithRetentionDays: The SQL database threat detection policy definition to set the number of days to keep in the Threat Detection audit logs.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.WithSecurityAlertPolicyState: The SQL database threat detection policy definition to set the state.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.WithStorageAccountAccessKey: The SQL database threat detection policy definition to set the storage access key.
SqlDatabaseThreatDetectionPolicy.DefinitionStages.WithStorageEndpoint: The SQL database threat detection policy definition to set the storage endpoint.
SqlDatabaseThreatDetectionPolicy.Update: The template for a SQL database threat detection policy update operation, containing all the settings that can be modified.
SqlDatabaseThreatDetectionPolicy.UpdateStages: Grouping of all the SQL database threat detection policy update stages.
SqlElasticPool.DefinitionStages.WithDatabaseDtuMax&lt;ParentT&gt;: The SQL Elastic Pool definition to set the maximum DTU for one database.
SqlElasticPool.DefinitionStages.WithDatabaseDtuMin&lt;ParentT&gt;: The SQL Elastic Pool definition to set the minimum DTU for database.
SqlElasticPool.DefinitionStages.WithDtu&lt;ParentT&gt;: The SQL Elastic Pool definition to set the number of shared DTU for elastic pool.
SqlElasticPool.DefinitionStages.WithEdition&lt;ParentT&gt;: The SQL Elastic Pool definition to set the edition for database.
SqlElasticPool.DefinitionStages.WithPremiumEdition&lt;ParentT&gt;: The SQL Elastic Pool definition to set the eDTU and storage capacity limits for a premium pool.
SqlElasticPool.DefinitionStages.WithStandardEdition&lt;ParentT&gt;: The SQL Elastic Pool definition to set the eDTU and storage capacity limits for a standard pool.
SqlElasticPool.DefinitionStages.WithStorageCapacity&lt;ParentT&gt;: The SQL Elastic Pool definition to set the storage limit for the SQL Azure Database Elastic Pool in MB.
SqlElasticPool.Update: The template for a SQL Elastic Pool update operation, containing all the settings that can be modified.
SqlElasticPool.UpdateStages.WithDatabase: The SQL Elastic Pool definition to add the Database in the elastic pool.
SqlElasticPool.UpdateStages.WithDatabaseDtuMax: The SQL Elastic Pool definition to set the maximum DTU for one database.
SqlElasticPool.UpdateStages.WithDatabaseDtuMin: The SQL Elastic Pool definition to set the minimum DTU for database.
SqlElasticPool.UpdateStages.WithDtu: The SQL Elastic Pool definition to set the number of shared DTU for elastic pool.
SqlElasticPool.UpdateStages.WithReservedDTUAndStorageCapacity: The SQL Elastic Pool update definition to set the eDTU and storage capacity limits.
SqlElasticPool.UpdateStages.WithStorageCapacity: The SQL Elastic Pool definition to set the storage limit for the SQL Azure Database Elastic Pool in MB.
SqlElasticPoolOperations.DefinitionStages.WithBasicEdition: The SQL Elastic Pool definition to set the eDTU and storage capacity limits for a basic pool.
SqlElasticPoolOperations.DefinitionStages.WithDatabase: The SQL Elastic Pool definition to add the Database in the Elastic Pool.
SqlElasticPoolOperations.DefinitionStages.WithDatabaseDtuMax: The SQL Elastic Pool definition to set the maximum DTU for one database.
SqlElasticPoolOperations.DefinitionStages.WithDatabaseDtuMin: The SQL Elastic Pool definition to set the minimum DTU for database.
SqlElasticPoolOperations.DefinitionStages.WithDtu: The SQL Elastic Pool definition to set the number of shared DTU for elastic pool.
SqlElasticPoolOperations.DefinitionStages.WithEdition: The SQL Elastic Pool definition to set the edition type.
SqlElasticPoolOperations.DefinitionStages.WithPremiumEdition: The SQL Elastic Pool definition to set the eDTU and storage capacity limits for a premium pool.
SqlElasticPoolOperations.DefinitionStages.WithSqlServer: The first stage of the SQL Server Elastic Pool definition.
SqlElasticPoolOperations.DefinitionStages.WithStandardEdition: The SQL Elastic Pool definition to set the eDTU and storage capacity limits for a standard pool.
SqlElasticPoolOperations.DefinitionStages.WithStorageCapacity: The SQL Elastic Pool definition to set the storage limit for the SQL Azure Database Elastic Pool in MB.
SqlElasticPoolOperations.SqlElasticPoolActionsDefinition: Grouping of the Azure SQL Elastic Pool common actions.
SqlEncryptionProtector. The SQL Encryption Protector update definition to set the server
UpdateStages.WithServerKey key name and type.
NameAndType
SqlFailoverGroup.Update Grouping of all the SQL Virtual Network Rule update stages.
Stages
SqlFailoverGroup.Update The SQL Failover Group update definition to set the partner
Stages.WithDatabase servers.
SqlFailoverGroup.Update The SQL Failover Group update definition to set the failover
Stages.WithReadOnlyEndpoint policy of the read-only endpoint.
Policy
SqlFailoverGroup.Update The SQL Failover Group update definition to set the read-write
Stages.WithReadWrite endpoint failover policy.
EndpointPolicy
SqlFailoverGroupOperations. The SQL Failover Group definition to set the partner servers.
DefinitionStages.With
Database
SqlFailoverGroupOperations. The SQL Failover Group definition to set the partner servers.
DefinitionStages.WithPartner
Server
SqlFailoverGroupOperations. The SQL Failover Group definition to set the failover policy of the
DefinitionStages.WithRead read-only endpoint.
OnlyEndpointPolicy
SqlFailoverGroupOperations. The SQL Failover Group definition to set the read-write endpoint
DefinitionStages.WithRead failover policy.
WriteEndpointPolicy
SqlFirewallRule.DefinitionStages.Blank<ParentT> - The first stage of the SQL Server Firewall Rule definition.
SqlFirewallRule.DefinitionStages.WithIPAddress<ParentT> - The SQL Firewall Rule definition to set the IP address for the parent SQL Server.
SqlFirewallRule.DefinitionStages.WithIPAddressRange<ParentT> - The SQL Firewall Rule definition to set the IP address range for the parent SQL Server.
SqlFirewallRule.SqlFirewallRuleDefinition<ParentT> - Container interface for all the definitions that need to be implemented.
SqlFirewallRule.UpdateStages.WithEndIPAddress - The SQL Firewall Rule definition to set the ending IP address for the server.
SqlFirewallRule.UpdateStages.WithStartIPAddress - The SQL Firewall Rule definition to set the starting IP address for the server.
SqlFirewallRuleOperations.DefinitionStages.WithIPAddressRange - The SQL Firewall Rule definition to set the IP address range for the parent SQL Server.
SqlFirewallRuleOperations.DefinitionStages.WithSqlServer - The first stage of the SQL Server Firewall Rule definition.
SqlFirewallRuleOperations.SqlFirewallRuleActionsDefinition - Grouping of the Azure SQL Server Firewall Rule common actions.
SqlServer.DefinitionStages.WithFirewallRule - The stage of the SQL Server definition allowing to specify the SQL Firewall rules.
SqlServer.DefinitionStages.WithVirtualNetworkRule - The stage of the SQL Server definition allowing to specify the SQL Virtual Network Rules.
SqlServer.UpdateStages.WithFirewallRule - The stage of the SQL Server update definition allowing to specify the SQL Firewall rules.
SqlServerAutomaticTuning.UpdateStages.WithAutomaticTuningMode - The update stage setting the SQL server automatic tuning desired state.
SqlServerAutomaticTuning.UpdateStages.WithAutomaticTuningOptions - The update stage setting the server automatic tuning options.
SqlServerDnsAliasOperations.DefinitionStages - Grouping of all the SQL Server DNS alias definition stages.
SqlServerDnsAliasOperations.DefinitionStages.WithCreate - The final stage of the SQL Server DNS alias definition.
SqlServerDnsAliasOperations.DefinitionStages.WithSqlServer - The first stage of the SQL Server DNS alias definition.
SqlServerDnsAliasOperations.SqlServerDnsAliasActionsDefinition - Grouping of the Azure SQL Server DNS alias common actions.
SqlServerKey.Update - The template for a SQL Server Key update operation, containing all the settings that can be modified.
SqlServerKey.UpdateStages.WithCreationDate - The SQL Server Key definition to set the server key creation date.
SqlServerKey.UpdateStages.WithThumbprint - The SQL Server Key definition to set the thumbprint.
SqlServerKeyOperations.DefinitionStages.WithCreationDate - The SQL Server Key definition to set the server key creation date.
SqlServerKeyOperations.DefinitionStages.WithServerKeyType - The SQL Server Key definition to set the server key type.
SqlServerSecurityAlertPolicy.Update - The template for a SQL Server Security Alert Policy update operation, containing all the settings that can be modified.
SqlServerSecurityAlertPolicy.UpdateStages - Grouping of all the SQL Server Security Alert Policy update stages.
SqlServerSecurityAlertPolicy.UpdateStages.WithDisabledAlerts - The SQL Server Security Alert Policy update definition to set an array of alerts that are disabled.
SqlServerSecurityAlertPolicy.UpdateStages.WithEmailAccountAdmins - The SQL Server Security Alert Policy update definition to set if an alert will be sent to the account administrators.
SqlServerSecurityAlertPolicy.UpdateStages.WithEmailAddresses - The SQL Server Security Alert Policy update definition to set an array of e-mail addresses to which the alert is sent.
SqlServerSecurityAlertPolicy.UpdateStages.WithRetentionDays - The SQL Server Security Alert Policy update definition to set the number of days to keep in the Threat Detection audit logs.
SqlServerSecurityAlertPolicy.UpdateStages.WithState - The SQL Server Security Alert Policy update definition to set the state.
SqlServerSecurityAlertPolicy.UpdateStages.WithStorageAccount - The SQL Server Security Alert Policy update definition to specify the storage account blob endpoint and access key.
SqlServerSecurityAlertPolicyOperations.DefinitionStages - Grouping of all the SQL Server Security Alert Policy definition stages.
SqlServerSecurityAlertPolicyOperations.DefinitionStages.WithCreate - The final stage of the SQL Server Security Alert Policy definition.
SqlServerSecurityAlertPolicyOperations.DefinitionStages.WithDisabledAlerts - The SQL Server Security Alert Policy definition to set an array of alerts that are disabled.
SqlServerSecurityAlertPolicyOperations.DefinitionStages.WithEmailAccountAdmins - The SQL Server Security Alert Policy definition to set if an alert will be sent to the account administrators.
SqlServerSecurityAlertPolicyOperations.DefinitionStages.WithEmailAddresses - The SQL Server Security Alert Policy definition to set an array of e-mail addresses to which the alert is sent.
SqlServerSecurityAlertPolicyOperations.DefinitionStages.WithRetentionDays - The SQL Server Security Alert Policy definition to set the number of days to keep in the Threat Detection audit logs.
SqlServerSecurityAlertPolicyOperations.DefinitionStages.WithSqlServer - The first stage of the SQL Server Security Alert Policy definition.
SqlServerSecurityAlertPolicyOperations.DefinitionStages.WithState - The SQL Server Security Alert Policy definition to set the state.
SqlServerSecurityAlertPolicyOperations.DefinitionStages.WithStorageAccount - The SQL Server Security Alert Policy definition to specify the storage account blob endpoint and access key.
SqlServerSecurityAlertPolicyOperations.SqlServerSecurityAlertPolicyActionsDefinition - Grouping of the Azure SQL Server Security Alert Policy common actions.
SqlSyncGroup.Update - The template for a SQL Sync Group update operation, containing all the settings that can be modified.
SqlSyncGroup.UpdateStages.WithConflictResolutionPolicy - The SQL Sync Group definition to set the conflict resolution policy.
SqlSyncGroup.UpdateStages.WithDatabasePassword - The SQL Sync Group definition to set the database login password.
SqlSyncGroup.UpdateStages.WithDatabaseUserName - The SQL Sync Group definition to set the database user name.
SqlSyncGroup.UpdateStages.WithInterval - The SQL Sync Group definition to set the sync frequency.
SqlSyncGroup.UpdateStages.WithSyncDatabaseId - The SQL Sync Group definition to set the database ID to sync with.
SqlSyncGroupOperations.DefinitionStages.WithDatabasePassword - The SQL Sync Group definition to set the database login password.
SqlSyncGroupOperations.DefinitionStages.WithDatabaseUserName - The SQL Sync Group definition to set the database user name.
SqlSyncGroupOperations.DefinitionStages.WithInterval - The SQL Sync Group definition to set the sync frequency.
SqlSyncGroupOperations.DefinitionStages.WithSyncDatabaseId - The SQL Sync Group definition to set the database ID to sync with.
SqlSyncGroupOperations.DefinitionStages.WithSyncGroupDatabase - The SQL Sync Group definition to set the parent database name.
SqlSyncGroupOperations.SqlSyncGroupActionsDefinition - Grouping of the Azure SQL Server Sync Group common actions.
SqlSyncMember.Update - The template for a SQL Sync Member update operation, containing all the settings that can be modified.
SqlSyncMember.UpdateStages.WithMemberDatabaseType - The SQL Sync Member definition to set the database type.
SqlSyncMember.UpdateStages.WithMemberPassword - The SQL Sync Member definition to set the member database password.
SqlSyncMember.UpdateStages.WithMemberUserName - The SQL Sync Member definition to set the member database user name.
SqlSyncMember.UpdateStages.WithSyncDirection - The SQL Sync Member definition to set the sync direction.
SqlSyncMemberOperations.DefinitionStages.WithMemberDatabaseType - The SQL Sync Member definition to set the database type.
SqlSyncMemberOperations.DefinitionStages.WithMemberPassword - The SQL Sync Member definition to set the member database password.
SqlSyncMemberOperations.DefinitionStages.WithMemberSqlDatabase - The SQL Sync Member definition to set the member database.
SqlSyncMemberOperations.DefinitionStages.WithMemberSqlServer - The SQL Sync Member definition to set the member server and database.
SqlSyncMemberOperations.DefinitionStages.WithMemberUserName - The SQL Sync Member definition to set the member database user name.
SqlSyncMemberOperations.DefinitionStages.WithSyncDirection - The SQL Sync Member definition to set the sync direction.
SqlSyncMemberOperations.DefinitionStages.WithSyncGroupName - The SQL Sync Member definition to set the parent sync group name.
SqlSyncMemberOperations.DefinitionStages.WithSyncMemberDatabase - The SQL Sync Member definition to set the parent database name.
SqlVirtualNetworkRule.DefinitionStages - Grouping of all the SQL Virtual Network Rule definition stages.
SqlVirtualNetworkRule.DefinitionStages.Blank<ParentT> - The first stage of the SQL Server Virtual Network Rule definition.
SqlVirtualNetworkRule.DefinitionStages.WithAttach<ParentT> - The final stage of the SQL Virtual Network Rule definition.
SqlVirtualNetworkRule.DefinitionStages.WithServiceEndpoint<ParentT> - The SQL Virtual Network Rule definition to set the ignore flag for the missing subnet's SQL service endpoint entry.
SqlVirtualNetworkRule.DefinitionStages.WithSubnet<ParentT> - The SQL Virtual Network Rule definition to set the virtual network ID and the subnet name.
SqlVirtualNetworkRule.Update - The template for a SQL Virtual Network Rule update operation, containing all the settings that can be modified.
SqlVirtualNetworkRule.UpdateStages - Grouping of all the SQL Virtual Network Rule update stages.
SqlVirtualNetworkRule.UpdateStages.WithServiceEndpoint - The SQL Virtual Network Rule definition to set the ignore flag for the missing subnet's SQL service endpoint entry.
SqlVirtualNetworkRule.UpdateStages.WithSubnet - The SQL Virtual Network Rule definition to set the virtual network ID and the subnet name.
SqlVirtualNetworkRuleOperations.DefinitionStages.WithCreate - The final stage of the SQL Virtual Network Rule definition.
SqlVirtualNetworkRuleOperations.DefinitionStages.WithServiceEndpoint - The SQL Virtual Network Rule definition to set the ignore flag for the missing subnet's SQL service endpoint entry.
SqlVirtualNetworkRuleOperations.DefinitionStages.WithSqlServer - The first stage of the SQL Server Virtual Network Rule definition.
SqlVirtualNetworkRuleOperations.DefinitionStages.WithSubnet - The SQL Virtual Network Rule definition to set the virtual network ID and the subnet name.
SqlVirtualNetworkRuleOperations.SqlVirtualNetworkRuleActionsDefinition - Grouping of the Azure SQL Server Virtual Network Rule common actions.
Enums
AuthenticationType - Defines values for AuthenticationType.
SqlElasticPoolBasicEDTUs - The reserved eDTUs value range for a "Basic" edition of an Azure SQL Elastic Pool.
SqlElasticPoolBasicMaxEDTUs - The maximum limit of the reserved eDTUs value range for a "Basic" edition of an Azure SQL Elastic Pool.
SqlElasticPoolBasicMinEDTUs - The minimum limit of the reserved eDTUs value range for a "Basic" edition of an Azure SQL Elastic Pool.
SqlElasticPoolPremiumMaxEDTUs - The maximum limit of the reserved eDTUs value range for a "Premium" edition of an Azure SQL Elastic Pool.
SqlElasticPoolPremiumMinEDTUs - The minimum limit of the reserved eDTUs value range for a "Premium" edition of an Azure SQL Elastic Pool.
SqlElasticPoolStandardMaxEDTUs - The maximum limit of the reserved eDTUs value range for a "Standard" edition of an Azure SQL Elastic Pool.
SqlElasticPoolStandardMinEDTUs - The minimum limit of the reserved eDTUs value range for a "Standard" edition of an Azure SQL Elastic Pool.
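The DefinitionStages and UpdateStages interfaces above encode a staged fluent builder: each With* stage exposes only the setters that are valid next, and the chain ends in a create stage. A minimal sketch of the same pattern, with hypothetical names (the actual SDK interfaces are Java and enforce the stages at compile time):

```python
class FirewallRuleDefinition:
    """Illustrative staged builder mirroring SqlFirewallRule.DefinitionStages.

    Hypothetical class for explanation only; it is not part of any Azure SDK.
    """

    def __init__(self, name):
        self.name = name
        self.start_ip = None
        self.end_ip = None

    # WithIPAddress stage: a single address sets both ends of the range.
    def with_ip_address(self, ip):
        self.start_ip = self.end_ip = ip
        return self

    # WithIPAddressRange stage: explicit start and end addresses.
    def with_ip_address_range(self, start_ip, end_ip):
        self.start_ip, self.end_ip = start_ip, end_ip
        return self

    # WithCreate stage: validate and return the finished rule description.
    def create(self):
        if self.start_ip is None or self.end_ip is None:
            raise ValueError("IP address range must be set before create()")
        return {"name": self.name,
                "startIpAddress": self.start_ip,
                "endIpAddress": self.end_ip}

rule = (FirewallRuleDefinition("allow-office")
        .with_ip_address_range("10.0.0.1", "10.0.0.254")
        .create())
```

In the Java SDK the "only valid setters next" property comes from each stage method returning the next stage interface rather than the concrete builder.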
The Azure SQL Database REST API includes operations for managing Azure SQL
Database resources.
Operation Group - Description
Backup Short Term Retention Policies - Create, get, update, list a database's short term retention policy.
Data Warehouse User Activities - Get and list the user activities of a data warehouse, which includes running and suspended queries.
Database Advanced Threat Protection Settings - Create, get, update, list a database's Advanced Threat Protection state.
Database Security Alert Policies - Create, get, update, list a database's security alert policy.
Database Vulnerability Assessment Rule Baselines - Create, get, update, list, delete the database's vulnerability assessment rule baseline.
Database Vulnerability Assessment Scans - Get, list, execute, export the vulnerability assessment scans of a database.
Database Vulnerability Assessments - Create, get, update, list, delete the database's vulnerability assessment.
Databases - Create, get, update, list, delete, import, export, rename, pause, resume, upgrade SQL databases.
Elastic Pool Database Activities - Get the activities for databases in an elastic pool.
Elastic Pool Operations - Get a list of operations performed on the elastic pool, or cancel the asynchronous operation on the elastic pool.
Elastic Pools - Create, get, update, delete, failover the elastic pools.
Encryption Protectors - Get, update, list, revalidate the existing encryption protectors.
Endpoint Certificates - Get and list the certificates used on endpoints on the target instance.
Failover Groups - Create, get, update, list, delete, and failover a failover group.
Instance Failover Groups - Create, get, update, list, delete, and failover an instance failover group.
Instance Pools - Create, get, update, list, delete the instance pools.
Job Agents - Create, get, update, list, delete the job agents.
Job Credentials - Create, get, update, list, delete the job credentials.
Job Executions - Create, get, update, list, cancel the job executions.
Job Step Executions - Get and list the step executions of a job execution.
Job Steps - Create, get, update, list, delete job steps for a job's current version.
Job Target Executions - Get or list the target executions of a job step execution.
Job Target Groups - Create, get, update, list, delete the job target groups.
Ledger Digest Uploads - Create, get, update, list the ledger digest upload configuration for a database.
Location Capabilities - Get the subscription capabilities available for the specified location.
Long Term Retention Backups - Create, get, update, list, delete a long term retention backup.
Long Term Retention Managed Instance Backups - Create, get, update, list, delete a long term retention backup for a managed database.
Long Term Retention Policies - Get, list, set a database's long term retention policy.
Managed Backup Short Term Retention Policies - Create, get, update, list a managed database's short term retention policy.
Managed Database Security Alert Policies - Create, get, update, list the managed database security alert policies.
Managed Database Sensitivity Labels - Create, get, update, list the sensitivity labels of a given database, or enable or disable sensitivity recommendations on a given column.
Managed Database Transparent Data Encryption - Create, get, update, list a managed database's transparent data encryption.
Managed Database Vulnerability Assessment Rule Baselines - Create, get, update, list a managed database's vulnerability assessment rule baseline.
Managed Database Vulnerability Assessment Scans - Get, list, execute, export a managed database's vulnerability assessment scans.
Managed Database Vulnerability Assessments - Create, get, update, list, delete a managed database's vulnerability assessments.
Managed Databases - Create, get, update, list, delete, restore the managed databases.
Managed Instance Azure AD Only Authentications - Get, set, list, delete the existing server Active Directory only authentication properties.
Managed Instance Encryption Protectors - Get, update, list, revalidate the existing encryption protectors of a managed instance.
Managed Instance Keys - Create, get, update, list, delete the managed instance keys.
Managed Instance Long Term Retention Policies - Create, get, list, update the managed instance's long term retention policies.
Managed Instance Operations - Get, list, cancel the operations performed on the managed instance.
Managed Instance Private Endpoint Connections - Create, get, list, update, delete the private endpoint connections on a managed instance.
Managed Instance Private Link Resources - Get or list the private link resources on the managed instance.
Managed Instance Tde Certificates - Create a Transparent Data Encryption certificate for a given managed instance.
Managed Instance Vulnerability Assessments - Create, get, list, update, delete the managed instance's vulnerability assessment policies.
Managed Instances - Create, get, update, list, delete, failover the managed instances.
Managed Restorable Dropped Database Backup Short Term Retention Policies - Create, get, update, list the managed restorable dropped database's short term retention policies.
Managed Server DNS Aliases - Create, get, list, acquire a managed server DNS alias.
Managed Server Security Alert Policies - Create, get, list, update the managed server's security alert policies.
Operations - List all of the available SQL Database REST API operations.
Outbound Firewall Rules - Create, get, update, list, delete the outbound firewall rules.
Private Endpoint Connections - Create, get, update, list, delete the private endpoint connections on a server.
Private Link Resources - Get or list the private link resources for a SQL server.
Restore Points - Create, get, update, list, delete database restore points.
Sensitivity Labels - Create, get, update, list the sensitivity labels of a given database, or enable or disable sensitivity recommendations on a given column.
Server Advanced Threat Protection Settings - Create, get, update, list the server's Advanced Threat Protection states.
Server Azure AD Administrators - Create, get, list, update, delete Azure Active Directory administrators in a server.
Server Azure AD Only Authentications - Create, get, list, update, delete the server Active Directory only authentication property.
Server Blob Auditing Policies - Create, get, update, list an extended server or database's blob auditing policy.
Server DevOps Audit Settings - Create, get, list, update DevOps audit settings of a server.
Server DNS Aliases - Create, get, list, acquire or delete a server DNS alias.
Server Security Alert Policies - Create, get, list, update a server's security alert policies.
Server Trust Groups - Create, get, list, update, delete server trust groups.
Server Vulnerability Assessments - Create, get, list, update, delete the server vulnerability assessment policies.
Sync Agents - Create, get, list, update, delete the sync agents, or generate a sync agent key.
Sync Groups - Create, get, list, update, delete the sync groups, or refresh a hub database schema.
Sync Members - Create, get, list, update, delete the sync members.
Transparent Data Encryptions - Create, get, list, update a logical database's transparent data encryption configurations.
Virtual Clusters - Create, get, list, update, delete the virtual clusters.
Virtual Network Rules - Create, get, list, update, delete the virtual network rules.
Workload Classifiers - Create, get, list, update, delete the workload classifiers.
Workload Groups - Create, get, list, update, delete the workload groups.
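Each operation group above maps to a resource path under the Microsoft.Sql provider in Azure Resource Manager. As a rough sketch (subscription ID and resource names below are placeholders), a Databases get, create, or delete request targets a URL of this shape, with the desired API version passed as a query parameter:

```python
def database_url(subscription_id, resource_group, server, database,
                 api_version="2021-11-01"):
    """Build the ARM request URL for the Databases operation group."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Sql"
        f"/servers/{server}/databases/{database}"
        f"?api-version={api_version}"
    )

# Placeholder identifiers for illustration only.
url = database_url("00000000-0000-0000-0000-000000000000", "my-rg",
                   "my-server", "my-db")
```

An authenticated GET against this URL returns the database resource; PUT creates or updates it, and DELETE removes it.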
See Also
Azure SQL Database
Azure SQL Data Warehouse
Azure SQL Database Elastic Pool
Latest Stable Version of Azure SQL Database REST API
Microsoft.Sql resource types
Article • 02/13/2023
This article lists the available versions for each resource type.
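A version from this list is supplied as the apiVersion of the corresponding resource in an ARM template. For example, an instance pool deployed with the stable 2021-11-01 version might look like the following sketch (resource name, location, and parameter names are illustrative):

```json
{
  "type": "Microsoft.Sql/instancePools",
  "apiVersion": "2021-11-01",
  "name": "example-instance-pool",
  "location": "westus2",
  "sku": { "name": "GP_Gen5", "tier": "GeneralPurpose", "family": "Gen5" },
  "properties": {
    "subnetId": "[parameters('subnetId')]",
    "vCores": 8,
    "licenseType": "LicenseIncluded"
  }
}
```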
Types - Versions
instancePools - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2018-06-01-preview
locations/deletedServers - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
locations/instanceFailoverGroups - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-10-01-preview
locations/longTermRetentionManagedInstances/longTermRetentionDatabases/longTermRetentionManagedInstanceBackups - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2018-06-01-preview
locations/longTermRetentionServers/longTermRetentionDatabases/longTermRetentionBackups - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-03-01-preview
locations/managedDatabaseMoveOperationResults - 2022-05-01-preview
locations/serverTrustGroups - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
locations/timeZones - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
locations/usages - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2015-05-01-preview
managedInstances - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2018-06-01-preview, 2015-05-01-preview
managedInstances/administrators - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-03-01-preview
managedInstances/advancedThreatProtectionSettings - 2022-05-01-preview, 2022-02-01-preview
managedInstances/azureADOnlyAuthentications - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
managedInstances/databases - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2019-06-01-preview, 2018-06-01-preview, 2017-03-01-preview
managedInstances/databases/advancedThreatProtectionSettings - 2022-05-01-preview, 2022-02-01-preview
managedInstances/databases/backupLongTermRetentionPolicies - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2018-06-01-preview
managedInstances/databases/backupShortTermRetentionPolicies - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-03-01-preview
managedInstances/databases/queries - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
managedInstances/databases/restoreDetails - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2019-06-01-preview, 2018-06-01-preview
managedInstances/databases/schemas - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
managedInstances/databases/schemas/tables - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
managedInstances/databases/schemas/tables/columns - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
managedInstances/databases/schemas/tables/columns/sensitivityLabels - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2018-06-01-preview
managedInstances/databases/securityAlertPolicies - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-03-01-preview
managedInstances/databases/transparentDataEncryption - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview
managedInstances/databases/vulnerabilityAssessments - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-10-01-preview
managedInstances/databases/vulnerabilityAssessments/rules/baselines - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-10-01-preview
managedInstances/databases/vulnerabilityAssessments/scans - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-10-01-preview
managedInstances/distributedAvailabilityGroups - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview
managedInstances/dnsAliases - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview
managedInstances/dtc - 2022-05-01-preview, 2022-02-01-preview
managedInstances/encryptionProtector - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview, 2020-08-01-preview, 2020-02-02-preview, 2017-10-01-preview
managedInstances/endpointCertificates - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview
managedInstances/keys - 2022-05-01-preview, 2022-02-01-preview, 2021-11-01, 2021-11-01-preview, 2021-08-01-preview, 2021-05-01-preview, 2021-02-01-preview, 2020-11-01-preview,
2020-
08-01-
preview
2020-
02-02-
preview
2017-
10-01-
preview
Types Versions
managedInstances/operations 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2019-
06-01-
preview
2018-
06-01-
preview
Types Versions
managedInstances/privateEndpointConnections 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
managedInstances/privateLinkResources 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
Types Versions
managedInstances/recoverableDatabases 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
10-01-
preview
Types Versions
managedInstances/restorableDroppedDatabases 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
managedInstances/restorableDroppedDatabases/backupShortTermRetentionPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
managedInstances/securityAlertPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
managedInstances/serverTrustCertificates 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
Types Versions
managedInstances/sqlAgent 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
Types Versions
managedInstances/vulnerabilityAssessments 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2018-
06-01-
preview
Types Versions
servers 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2019-
06-01-
preview
2015-
05-01-
preview
2014-
04-01
Types Versions
servers/administrators 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2019-
06-01-
preview
2018-
06-01-
preview
2014-
04-01
servers/advancedThreatProtectionSettings 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
Types Versions
servers/advisors 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
2014-
04-01
servers/auditingPolicies 2014-
04-01
Types Versions
servers/auditingSettings 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/automaticTuning 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/azureADOnlyAuthentications 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
servers/communicationLinks 2014-
04-01
servers/connectionPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2014-
04-01
Types Versions
servers/databases 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2019-
06-01-
preview
2017-
10-01-
preview
2017-
03-01-
preview
2014-
04-01
servers/databases/advancedThreatProtectionSettings 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
Types Versions
servers/databases/advisors 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
2014-
04-01
Types Versions
servers/databases/advisors/recommendedActions 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
servers/databases/auditingPolicies 2014-
04-01
Types Versions
servers/databases/auditingSettings 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
2015-
05-01-
preview
Types Versions
servers/databases/automaticTuning 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
Types Versions
servers/databases/backupLongTermRetentionPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/databases/backupShortTermRetentionPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
10-01-
preview
servers/databases/connectionPolicies 2014-
04-01
servers/databases/dataMaskingPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2014-
04-01
servers/databases/dataMaskingPolicies/rules 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2014-
04-01
Types Versions
servers/databases/dataWarehouseUserActivities 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/databases/extendedAuditingSettings 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/databases/extensions 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2014-
04-01
servers/databases/geoBackupPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2014-
04-01
servers/databases/ledgerDigestUploads 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
Types Versions
servers/databases/replicationLinks 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2014-
04-01
Types Versions
servers/databases/restorePoints 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/databases/schemas 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
servers/databases/schemas/tables 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
Types Versions
servers/databases/schemas/tables/columns 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
Types Versions
servers/databases/schemas/tables/columns/sensitivityLabels 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/databases/securityAlertPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2018-
06-01-
preview
2014-
04-01
servers/databases/serviceTierAdvisors 2014-
04-01
servers/databases/sqlVulnerabilityAssessments 2022-
05-01-
preview
2022-
02-01-
preview
servers/databases/sqlVulnerabilityAssessments/baselines 2022-
05-01-
preview
2022-
02-01-
preview
servers/databases/sqlVulnerabilityAssessments/baselines/rules 2022-
05-01-
preview
2022-
02-01-
preview
Types Versions
servers/databases/sqlVulnerabilityAssessments/scans 2022-
05-01-
preview
2022-
02-01-
preview
servers/databases/sqlVulnerabilityAssessments/scans/scanResults 2022-
05-01-
preview
2022-
02-01-
preview
servers/databases/syncGroups 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2019-
06-01-
preview
2015-
05-01-
preview
Types Versions
servers/databases/syncGroups/syncMembers 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2019-
06-01-
preview
2015-
05-01-
preview
Types Versions
servers/databases/transparentDataEncryption 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2014-
04-01
Types Versions
servers/databases/vulnerabilityAssessments 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/databases/vulnerabilityAssessments/rules/baselines 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/databases/vulnerabilityAssessments/scans 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
10-01-
preview
Types Versions
servers/databases/workloadGroups 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2019-
06-01-
preview
Types Versions
servers/databases/workloadGroups/workloadClassifiers 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2019-
06-01-
preview
Types Versions
servers/devOpsAuditingSettings 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
servers/disasterRecoveryConfiguration 2014-
04-01
Types Versions
servers/dnsAliases 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/elasticPools 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
10-01-
preview
2014-
04-01
servers/elasticPools/databases 2014-
04-01
Types Versions
servers/encryptionProtector 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
Types Versions
servers/extendedAuditingSettings 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/failoverGroups 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
Types Versions
servers/firewallRules 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
2014-
04-01
servers/ipv6FirewallRules 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
Types Versions
servers/jobAgents 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/credentials 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/jobs 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/jobs/executions 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/jobs/executions/steps 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/jobs/executions/steps/targets 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/jobs/steps 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/jobs/versions 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/jobs/versions/steps 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/jobAgents/targetGroups 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
Types Versions
servers/keys 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
servers/outboundFirewallRules 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
Types Versions
servers/privateEndpointConnections 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2018-
06-01-
preview
Types Versions
servers/privateLinkResources 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2018-
06-01-
preview
servers/recommendedElasticPools 2014-
04-01
servers/recommendedElasticPools/databases 2014-
04-01
servers/recoverableDatabases 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2014-
04-01
Types Versions
servers/restorableDroppedDatabases 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2014-
04-01
Types Versions
servers/securityAlertPolicies 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2017-
03-01-
preview
servers/serviceObjectives 2014-
04-01
servers/sqlVulnerabilityAssessments 2022-
05-01-
preview
2022-
02-01-
preview
Types Versions
servers/syncAgents 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
Types Versions
servers/virtualNetworkRules 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
Types Versions
servers/vulnerabilityAssessments 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2018-
06-01-
preview
Types Versions
virtualClusters 2022-
05-01-
preview
2022-
02-01-
preview
2021-
11-01
2021-
11-01-
preview
2021-
08-01-
preview
2021-
05-01-
preview
2021-
02-01-
preview
2020-
11-01-
preview
2020-
08-01-
preview
2020-
02-02-
preview
2015-
05-01-
preview
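Each row above lists the apiVersion values accepted for that resource type when you declare it in an Azure Resource Manager (ARM) template. As a minimal sketch (the server name, location, and login values below are placeholders, not values from this reference):

```json
{
  "type": "Microsoft.Sql/servers",
  "apiVersion": "2021-11-01",
  "name": "example-sqlserver",
  "location": "eastus",
  "properties": {
    "administratorLogin": "sqladmin",
    "administratorLoginPassword": "<placeholder>"
  }
}
```

Picking a non-preview version from the table (such as 2021-11-01) is generally preferable for production templates; preview versions can change or be withdrawn.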
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
Analytics Platform System (PDW)
To manage your database, you need a tool. Whether your databases run in the cloud, on
Windows, on macOS, or on Linux, your tool doesn't need to run on the same platform as
the database.
You can view the links to the different SQL tools in the following tables.
Recommended tools
The following tools provide a graphical user interface (GUI).

Tool  Description  Operating system

Azure Data Studio  A lightweight editor that can run on-demand SQL queries and view and save results as text, JSON, or Excel. Edit data, organize your favorite database connections, and browse database objects in a familiar object browsing experience.  Windows, macOS, Linux

SQL Server Management Studio (SSMS)  Manage a SQL Server instance or database with full GUI support. Access, configure, manage, administer, and develop all components of SQL Server, Azure SQL Database, and Azure Synapse Analytics. Provides a single comprehensive utility that combines a broad group of graphical tools with a number of rich script editors to provide access to SQL for developers and database administrators of all skill levels.  Windows

Visual Studio Code (mssql extension)  The mssql extension for Visual Studio Code is the official SQL Server extension that supports connections to SQL Server and a rich editing experience for T-SQL in Visual Studio Code. Write T-SQL scripts in a lightweight editor.  Windows, macOS, Linux
Command-line tools

The following are the main command-line tools.

Tool  Description  Operating system

bcp  The bulk copy program utility (bcp) bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format.  Windows, Linux

mssql-cli (preview)  mssql-cli is an interactive command-line tool for querying SQL Server, with features such as IntelliSense and syntax highlighting.  Windows, macOS, Linux

sqlcmd  The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt.  Windows, macOS, Linux

SQL Server PowerShell  SQL Server PowerShell provides cmdlets for working with SQL.  Windows, macOS, Linux
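For illustration, a typical sqlcmd invocation looks like the following (the server name and database are placeholders; run it from any shell on a machine where sqlcmd is installed and a SQL Server instance is reachable):

```shell
# Hypothetical example: run an ad hoc query against a local default instance
# using Windows authentication (-E). On Linux/macOS, use -U/-P instead.
sqlcmd -S localhost -d master -E -Q "SELECT @@VERSION;"
```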
Tool  Description

Configuration Manager  Use SQL Server Configuration Manager to configure SQL Server services and configure network connectivity. Configuration Manager runs on Windows.

Data Migration Assistant  The Data Migration Assistant tool helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality in your new version of SQL Server or Azure SQL Database.

Distributed Replay  Use the Distributed Replay feature to help you assess the impact of future SQL Server upgrades. Also use Distributed Replay to help assess the impact of hardware and operating system upgrades, and SQL Server tuning.

ssbdiagnose  The ssbdiagnose utility reports issues in Service Broker conversations or the configuration of Service Broker services.

SQL Server Migration Assistant  Use SQL Server Migration Assistant to automate database migration to SQL Server from Microsoft Access, DB2, MySQL, Oracle, and Sybase.
If you're looking for additional tools that aren't mentioned on this page, see SQL Command Prompt Utilities and Download SQL Server extended features and tools.
Download SQL Server Management Studio (SSMS)
Article • 06/28/2023
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
SQL Endpoint in Microsoft Fabric
Warehouse in Microsoft Fabric
SQL Server Management Studio (SSMS) is an integrated environment for managing any
SQL infrastructure, from SQL Server to Azure SQL Database. SSMS provides tools to
configure, monitor, and administer instances of SQL Server and databases. Use SSMS to
deploy, monitor, and upgrade the data-tier components used by your applications and
build queries and scripts.
Use SSMS to query, design, and manage your databases and data warehouses, wherever
they are - on your local computer or in the cloud.
Download SSMS
Free Download for SQL Server Management Studio (SSMS) 19.1
SSMS 19.1 is the latest general availability (GA) version. If you have a preview version of
SSMS 19 installed, you should uninstall it before installing SSMS 19.1. If you have SSMS
19.x installed, installing SSMS 19.1 upgrades it to 19.1.
By using SQL Server Management Studio, you agree to its license terms and privacy
statement . If you have comments or suggestions or want to report issues, the best
way to contact the SSMS team is at SQL Server user feedback .
The SSMS 19.x installation doesn't upgrade or replace SSMS versions 18.x or earlier.
SSMS 19.x installs alongside previous versions, so both versions are available for use.
However, if you have an earlier preview version of SSMS 19 installed, you must uninstall
it before installing SSMS 19.1. You can see if you have a preview version by going to the
Help > About window.
If a computer contains side-by-side installations of SSMS, verify you start the correct
version for your specific needs. The latest version is labeled Microsoft SQL Server
Management Studio v19.1.
Important
Beginning with SQL Server Management Studio (SSMS) 18.7, Azure Data Studio is
automatically installed alongside SSMS. Users of SQL Server Management Studio
are now able to benefit from the innovations and features in Azure Data Studio.
Azure Data Studio is a cross-platform and open-source desktop tool for your
environments, whether in the cloud, on-premises, or hybrid.
To learn more about Azure Data Studio, check out What is Azure Data Studio or
the FAQ.
Available languages
This release of SSMS can be installed in the following languages:
Tip
If you are accessing this page from a non-English language version and want to see
the most up-to-date content, please select Read in English at the top of this page.
You can download different languages from the US-English version site by selecting
available languages.
Note
The SQL Server PowerShell module is a separate install through the PowerShell
Gallery. For more information, see Download SQL Server PowerShell Module.
What's new
For details and more information about what's new in this release, see Release notes for
SQL Server Management Studio.
Previous versions
This article is for the latest version of SSMS only. To download previous versions of
SSMS, visit Previous SSMS releases.
Note
Connectivity to Azure Analysis Services through Azure Active Directory with MFA
requires SSMS 18.5.1 or later.
Unattended install
You can install SSMS using PowerShell.
Follow the steps below if you want to install SSMS in the background with no GUI
prompts.
Example:
PowerShell
$media_path = "C:\Installers\SSMS-Setup-ENU.exe"
$install_path = "$env:SystemDrive\SSMSto"
$params = "/Install /Quiet SSMSInstallRoot=`"$install_path`""
Start-Process -FilePath $media_path -ArgumentList $params -Wait
You can also pass /Passive instead of /Quiet to see the setup UI.
Uninstall
SSMS may install shared components if it's determined they're missing during SSMS
installation. SSMS won't automatically uninstall these components when you uninstall
SSMS.
Supported operating systems:
Windows 11 (64-bit)
Windows 10 (64-bit) version 1607 (10.0.14393) or later
Windows Server 2022 (64-bit)
Windows Server 2019 (64-bit)
Windows Server 2016 (64-bit)
Supported hardware:
1.8 GHz or faster x86 (Intel, AMD) processor. Dual-core or better recommended
2 GB of RAM; 4 GB of RAM recommended (2.5 GB minimum if running on a virtual
machine)
Hard disk space: Minimum of 2 GB up to 10 GB of available space
Note
SSMS is available only as a 32-bit application for Windows. If you need a tool that
runs on operating systems other than Windows, we recommend Azure Data Studio.
Azure Data Studio is a cross-platform tool that runs on macOS, Linux, and
Windows. For details, see Azure Data Studio.
Get help for SQL tools
All the ways to get help
SSMS user feedback .
Submit an Azure Data Studio Git issue
Contribute to Azure Data Studio
SQL Client Tools Forum
SQL Server Data Tools - MSDN forum
Support options for business users
Next steps
SQL tools
SQL Server Management Studio documentation
Azure Data Studio
Download SQL Server Data Tools (SSDT)
Latest updates
Azure Data Architecture Guide
SQL Server Blog
Contribute to SQL documentation
Did you know that you can edit SQL content yourself? If you do so, not only do you help
improve our documentation, but you also get credited as a contributor to the page.
Applies to:
SQL Server
Azure SQL Database
Azure Synapse Analytics
SQL Server Data Tools (SSDT) is a modern development tool for building SQL Server
relational databases, databases in Azure SQL, Analysis Services (AS) data models,
Integration Services (IS) packages, and Reporting Services (RS) reports. With SSDT, you
can design and deploy any SQL Server content type with the same ease as you would
develop an application in Visual Studio.
To modify the installed Visual Studio workloads to include SSDT, use the Visual Studio
Installer.
1. Launch the Visual Studio Installer. In the Windows Start menu, you can search for
"installer".
2. In the installer, select for the edition of Visual Studio that you want to add SSDT to,
and then choose Modify.
3. Select SQL Server Data Tools under Data storage and processing in the list of
workloads.
For Analysis Services, Integration Services, or Reporting Services projects, you can install
the appropriate extensions from within Visual Studio with Extensions > Manage
Extensions or from the Marketplace .
Analysis Services
Integration Services
Reporting Services
Relational databases: SQL Server 2016 (13.x) through SQL Server 2022 (16.x)
With Visual Studio 2019, the required functionality to enable Analysis Services,
Integration Services, and Reporting Services projects has moved into the respective
Visual Studio (VSIX) extensions only.
Offline installation
For scenarios where offline installation is required, such as low-bandwidth or isolated
networks, SSDT is available for offline installation. Two approaches are available; for
details, follow the Step-by-Step Guidelines for Offline Installation.
Previous versions
To download and install SSDT for Visual Studio 2017, or an older version of SSDT, see
Previous releases of SQL Server Data Tools (SSDT and SSDT-BI).
See Also
SSDT MSDN Forum
Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback
Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Azure Synapse Analytics Analytics Platform System (PDW)
The bulk copy program utility (bcp) bulk copies data between an instance of Microsoft
SQL Server and a data file in a user-specified format.
Note
For using bcp on Linux, see Install sqlcmd and bcp on Linux.
For detailed information about using bcp with Azure Synapse Analytics, see Load
data with bcp.
The bcp utility can be used to import large numbers of new rows into SQL Server tables
or to export data out of tables into data files. Except when used with the queryout
option, the utility requires no knowledge of Transact-SQL. To import data into a table,
you must either use a format file created for that table or understand the structure of
the table and the types of data that are valid for its columns.
For the syntax conventions that are used for the bcp syntax, see Transact-SQL syntax
conventions.
Note
If you use bcp to back up your data, create a format file to record the data format.
bcp data files do not include any schema or format information, so if a table or
view is dropped and you do not have a format file, you may be unable to import
the data.
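The note above can be sketched as a command sequence. This is a minimal sketch, assuming a hypothetical dbo.Widgets table in a database named MyDatabase and a trusted connection to the local default instance:

```shell
# 1. Record the data format so the data file can be reloaded later.
#    (format nul with -f writes the format file; -c selects character format.)
bcp dbo.Widgets format nul -c -f widgets.fmt -d MyDatabase -T

# 2. Export the table in character format.
bcp dbo.Widgets out widgets.dat -c -d MyDatabase -T

# 3. Later, reimport using the recorded format file, even if the
#    original table definition is no longer available for reference.
bcp dbo.Widgets in widgets.dat -f widgets.fmt -d MyDatabase -T
```

The table, database, and file names here are placeholders; the point is that the format file from step 1 is what makes the data file in step 3 self-describing.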
Version information
Release number: 15.0.4298.1
Build number: 15.0.4298.1
Release date: April 7, 2023
System requirements
Windows 7, Windows 8, Windows 8.1, Windows 10, Windows 11
Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows
Server 2012 R2, Windows Server 2016, Windows Server 2019, Windows Server 2022
This component requires both Windows Installer 4.5 and the latest Microsoft ODBC
Driver for SQL Server.
To check the bcp version, execute bcp -v command, and confirm that 15.0.4298.1 or
later is in use.
Syntax
Console
bcp [database_name.] schema.{table_name | view_name | "query"}
    {in data_file | out data_file | queryout data_file | format nul}
[-a packet_size]
[-b batch_size]
[-c]
[-C { ACP | OEM | RAW | code_page } ]
[-d database_name]
[-D]
[-e err_file]
[-E]
[-f format_file]
[-F first_row]
[-G Azure Active Directory Authentication]
[-h"hint [,...n]"]
[-i input_file]
[-k]
[-K application_intent]
[-l login_timeout]
[-L last_row]
[-m max_errors]
[-n]
[-N]
[-o output_file]
[-P password]
[-q]
[-r row_term]
[-R]
[-S [server_name[\instance_name]]]
[-t field_term]
[-T]
[-U login_id]
[-v]
[-V (80 | 90 | 100 | 110 | 120 | 130 | 140 | 150 | 160 ) ]
[-w]
[-x]
Command-line options
database_name
The name of the database in which the specified table or view resides. If not specified,
this is the default database for the user.
schema
The name of the owner of the table or view. schema is optional if the user performing
the operation owns the specified table or view. If schema isn't specified and the user
performing the operation doesn't own the specified table or view, SQL Server returns an
error message, and the operation is canceled.
table_name
The name of the destination table when importing data into SQL Server ( in ), and the
source table when exporting data from SQL Server ( out ).
view_name
The name of the destination view when copying data into SQL Server ( in ), and the
source view when copying data from SQL Server ( out ). Only views in which all columns
refer to the same table can be used as destination views. For more information on the
restrictions for copying data into views, see INSERT (Transact-SQL).
"query"
A Transact-SQL query that returns a result set. If the query returns multiple result sets,
only the first result set is copied to the data file; subsequent result sets are ignored. Use
double quotation marks around the query and single quotation marks around anything
embedded in the query. queryout must also be specified when bulk copying data from a
query.
The query can reference a stored procedure as long as all tables referenced inside the
stored procedure exist prior to executing the bcp statement. For example, if the stored
procedure generates a temp table, the bcp statement fails because the temp table is
available only at run time and not at statement execution time. In this case, consider
inserting the results of the stored procedure into a table and then use bcp to copy the
data from the table into a data file.
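That workaround can be sketched as follows; the procedure, table, and database names are hypothetical, and a trusted connection to the local default instance is assumed:

```shell
# Materialize the stored procedure's results into a permanent table first,
# because bcp can't see temp tables the procedure creates at run time.
sqlcmd -d MyDatabase -E -Q "INSERT INTO dbo.ProcResults EXEC dbo.MyProc;"

# Then bulk copy from the permanent table (or a query over it).
bcp "SELECT * FROM MyDatabase.dbo.ProcResults" queryout results.dat -c -T
```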
in
Copies from a file into the database table or view. Specifies the direction of the bulk
copy.
out
Copies from the database table or view to a file. Specifies the direction of the bulk copy.
If you specify an existing file, the file is overwritten. When extracting data, the bcp utility
represents an empty string as a null and a null string as an empty string.
data_file
The full path of the data file. When data is bulk imported into SQL Server, the data file
contains the data to be copied into the specified table or view. When data is bulk
exported from SQL Server, the data file contains the data copied from the table or view.
The path can have from 1 through 255 characters. The data file can contain a maximum
of 2^63 - 1 rows.
queryout
Copies from a query and must be specified only when bulk copying data from a query.
format
Creates a format file based on the option specified ( -n , -c , -w , or -N ) and the table or
view delimiters. When bulk copying data, the bcp command can refer to a format file,
which saves you from reentering format information interactively. The format option
requires the -f option; creating an XML format file also requires the -x option. For
more information, see Create a Format File (SQL Server). You must specify nul as the
value ( format nul ).
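For illustration, a hedged sketch of creating both format-file flavors for a hypothetical Warehouse.StockItems table, with a trusted connection assumed:

```shell
# Non-XML format file (character format); nul replaces the data file argument.
bcp WideWorldImporters.Warehouse.StockItems format nul -c -f StockItems.fmt -T

# Adding -x produces an XML format file instead.
bcp WideWorldImporters.Warehouse.StockItems format nul -c -x -f StockItems.xml -T
```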
-a packet_size
Specifies the number of bytes, per network packet, sent to and from the server. A server
configuration option can be set by using SQL Server Management Studio (or the
sp_configure system stored procedure). However, the server configuration option can
be overridden on an individual basis by using this option. packet_size can be from 4096
bytes to 65,535 bytes; the default is 4096 .
-b batch_size
Specifies the number of rows per batch of imported data. Each batch is imported and
logged as a separate transaction that imports the whole batch before being committed.
By default, all the rows in the data file are imported as one batch. To distribute the rows
among multiple batches, specify a batch_size that is smaller than the number of rows in
the data file. If the transaction for any batch fails, only insertions from the current batch
are rolled back. Batches already imported by committed transactions are unaffected by a
later failure.
-c
Performs the operation using a character data type. This option doesn't prompt for each
field; it uses char as the storage type, without prefixes and with \t (tab character) as the
field separator and \r\n (newline character) as the row terminator. -c isn't compatible
with -w .
For more information, see Use Character Format to Import or Export Data (SQL Server).
-C { ACP | OEM | RAW | code_page }
Specifies the code page of the data in the data file. code_page is relevant only if the data
contains char, varchar, or text columns with character values greater than 127 or less
than 32.
Note
We recommend specifying a collation name for each column in a format file, except
when you want the 65001 option to have priority over the collation/code page
specification.
OEM
Default code page used by the client. This is the default code page used if -C isn't
specified.
RAW
No conversion from one code page to another occurs. This is the fastest option
because no conversion occurs.
Versions prior to version 13 (SQL Server 2016 (13.x)) don't support code page
65001 (UTF-8 encoding). Versions beginning with 13 can import UTF-8 encoding
to earlier versions of SQL Server.
-d database_name
Specifies the database to connect to. By default, bcp connects to the user's default
database. If -d database_name and a three part name (database_name.schema.table,
passed as the first parameter to bcp) are specified, an error occurs because you can't
specify the database name twice. If database_name begins with a hyphen ( - ) or a
forward slash ( / ), don't add a space between -d and the database name.
-D
Causes the value passed to the bcp -S option to be interpreted as a data source name
(DSN). A DSN may be used to embed driver options to simplify command lines, enforce
driver options that aren't otherwise accessible from the command line such as
MultiSubnetFailover, or to help protect sensitive credentials from being discoverable as
command line arguments. For more information, see DSN Support in sqlcmd and bcp in
Connecting with sqlcmd.
-e err_file
Specifies the full path of an error file used to store any rows that the bcp utility can't
transfer from the file to the database. Error messages from the bcp command go to the
workstation of the user. If this option isn't used, an error file isn't created.
If err_file begins with a hyphen ( - ) or a forward slash ( / ), don't include a space between
-e and the err_file value.
-E
Specifies that identity value or values in the imported data file are to be used for the
identity column. If -E isn't given, the identity values for this column in the data file
being imported are ignored, and SQL Server automatically assigns unique values based
on the seed and increment values specified during table creation. For more information,
see DBCC CHECKIDENT.
If the data file doesn't contain values for the identity column in the table or view, use a
format file to specify that the identity column in the table or view should be skipped
when importing data; SQL Server automatically assigns unique values for the column.
The -E option has a special permissions requirement. For more information, see
"Remarks" later in this article.
-f format_file
Specifies the full path of a format file. The meaning of this option depends on the
environment in which it is used, as follows:
If -f is used with the format option, the specified format_file is created for the
specified table or view. To create an XML format file, also specify the -x option. For
more information, see Create a Format File (SQL Server).
Note
Using a format file with the in or out option is optional. In the absence of
the -f option, if -n , -c , -w , or -N is not specified, the command prompts for
format information and lets you save your responses in a format file (whose
default file name is bcp.fmt ).
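As a sketch, an import driven entirely by a previously created format file might look like the following; the file names are hypothetical and a trusted connection is assumed:

```shell
# The format file describes the layout of the data file, so no -n/-c/-w/-N
# switch is needed and bcp doesn't prompt interactively.
bcp WideWorldImporters.Warehouse.StockItems in StockItems.dat -f StockItems.fmt -T
```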
-F first_row
Specifies the number of the first row to export from a table or import from a data file.
This parameter requires a value greater than ( > ) 0 but less than ( < ) or equal to ( = ) the
total number of rows. In the absence of this parameter, the default is the first row of the
file.
-G
Applies to: Azure SQL Database and Azure Synapse Analytics only.
This switch is used by the client when connecting to Azure SQL Database or Azure
Synapse Analytics to specify that the user be authenticated using Azure Active Directory
authentication. The -G switch requires version 14.0.3008.27 or later versions. To
determine your version, execute bcp -v . For more information, see Use Azure Active
Directory Authentication for authentication with SQL Database or Azure Synapse
Analytics.
Tip
To check if your version of bcp includes support for Azure Active Directory (Azure
AD) Authentication, type bcp --help and verify that you see -G in the list of
available arguments.
Azure Active Directory Username and Password
When you want to use an Azure Active Directory user name and password, you can
provide the -G option and also use the user name and password by providing the
-U and -P options.
The following example exports data using Azure AD username and password
credentials. The example exports table bcptest from database testdb from Azure
server aadserver.database.windows.net and stores the data in file
c:\last\data1.dat :
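The command itself does not appear above; a sketch consistent with that description follows. The user name and password are placeholders, and supplying a password inline is shown only for illustration:

```shell
# -G selects Azure AD authentication; -U and -P carry the Azure AD credential.
bcp bcptest out "c:\last\data1.dat" -c -t , -S aadserver.database.windows.net -d testdb -G -U alice@contoso.com -P MyAADPassword
```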
The following example imports data using Azure AD Username and Password
where user and password are an Azure AD credential. The example imports data
from file c:\last\data1.dat into table bcptest for database testdb on Azure
server aadserver.database.windows.net using Azure AD User/Password:
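A sketch consistent with that description, again with placeholder credentials:

```shell
# Same switches as the export, with the direction reversed to in.
bcp bcptest in "c:\last\data1.dat" -c -t , -S aadserver.database.windows.net -d testdb -G -U alice@contoso.com -P MyAADPassword
```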
The following example exports data using Azure AD-Integrated account. The
example exports table bcptest from database testdb using Azure AD Integrated
from Azure server aadserver.database.windows.net and stores the data in file
c:\last\data2.dat :
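A sketch consistent with that description, assuming the signed-in account is federated with Azure AD:

```shell
# Omitting -U and -P while specifying -G selects Azure AD Integrated
# authentication (an assumption of this sketch; verify against your bcp version).
bcp bcptest out "c:\last\data2.dat" -c -t , -S aadserver.database.windows.net -d testdb -G
```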
The following example imports data using Azure AD-Integrated auth. The example
imports data from file c:\last\data2.txt into table bcptest for database testdb
on Azure server aadserver.database.windows.net using Azure AD Integrated auth:
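The matching import, sketched under the same assumptions:

```shell
# Azure AD Integrated import; note the source file name from the description.
bcp bcptest in "c:\last\data2.txt" -c -t , -S aadserver.database.windows.net -d testdb -G
```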
Azure AD Interactive authentication for Azure SQL Database and Azure
Synapse Analytics allows you to use an interactive method that supports multi-factor
authentication. For more information, see Active Directory Interactive
Authentication.
The following example exports data using Azure AD interactive mode indicating
username where user represents an Azure AD account. This is the same example
used in the previous section: Azure Active Directory Username and Password.
If an Azure AD user is a domain-federated user with a Windows account, the
user name required on the command line contains the domain account (for
example, joe@contoso.com ):
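A sketch of the interactive invocation; the account name is a placeholder:

```shell
# Supplying -U without -P together with -G triggers the Azure AD interactive
# prompt (including multi-factor authentication, if configured).
bcp bcptest out "c:\last\data1.dat" -c -t , -S aadserver.database.windows.net -d testdb -G -U joe@contoso.com
```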
If guest users exist in a specific Azure AD and are part of a group that exists in SQL
Database that has database permissions to execute the bcp command, their guest
user alias is used (for example, keith0@adventure-works.com ).
-h "hint [,...n]"
Specifies the hint or hints to be used during a bulk import of data into a table or view.
ORDER (column [ASC | DESC] [,...n])
The sort order of the data in the data file. Bulk import performance is improved if
the data being imported is sorted according to the clustered index on the table, if
any. If the data file is sorted in a different order, that is other than the order of a
clustered index key, or if there is no clustered index on the table, the ORDER clause
is ignored. The column names supplied must be valid column names in the
destination table. By default, bcp assumes the data file is unordered. For optimized
bulk import, SQL Server also validates that the imported data is sorted.
ROWS_PER_BATCH = bb
Number of rows of data per batch (as bb). Used when -b isn't specified, resulting
in the entire data file being sent to the server as a single transaction. The server
optimizes the bulkload according to the value bb. By default, ROWS_PER_BATCH is
unknown.
KILOBYTES_PER_BATCH = cc
Approximate number of kilobytes of data per batch (as cc). By default,
KILOBYTES_PER_BATCH is unknown.
TABLOCK
Specifies that a bulk update table-level lock is acquired for the duration of the
bulkload operation; otherwise, a row-level lock is acquired. This hint significantly
improves performance because holding a lock for the duration of the bulk-copy
operation reduces lock contention on the table. A table can be loaded concurrently
by multiple clients if the table has no indexes and TABLOCK is specified. By default,
locking behavior is determined by the table option table lock on bulkload.
CHECK_CONSTRAINTS
Specifies that all constraints on the target table or view must be checked during
the bulk-import operation. Without the CHECK_CONSTRAINTS hint, any CHECK,
and FOREIGN KEY constraints are ignored, and after the operation the constraint
on the table is marked as not-trusted.
Note
UNIQUE, PRIMARY KEY, and NOT NULL constraints are always enforced.
At some point, you need to check the constraints on the entire table. If the table
was nonempty before the bulk import operation, the cost of revalidating the
constraint may exceed the cost of applying CHECK constraints to the incremental
data. Therefore, we recommend that normally you enable constraint checking
during an incremental bulk import.
A situation in which you might want constraints disabled (the default behavior) is if
the input data contains rows that violate constraints. With CHECK constraints
disabled, you can import the data and then use Transact-SQL statements to
remove data that isn't valid.
Note
bcp now enforces data validation and data checks that might cause scripts to
fail if they're executed on invalid data in a data file.
FIRE_TRIGGERS
Specified with the in argument, any insert triggers defined on the destination
table will run during the bulk-copy operation. If FIRE_TRIGGERS isn't specified, no
insert triggers will run. FIRE_TRIGGERS is ignored for the out , queryout , and
format arguments.
-i input_file
Specifies the name of a response file, containing the responses to the command prompt
questions for each data field when a bulk copy is being performed using interactive
mode ( -n , -c , -w , or -N not specified).
-k
Specifies that empty columns should retain a null value during the operation, rather
than have any default values for the columns inserted. For more information, see Keep
Nulls or Use Default Values During Bulk Import (SQL Server).
-K application_intent
Declares the application workload type when connecting to a server. The only value that
is possible is ReadOnly. If -K isn't specified, the bcp utility doesn't support connectivity
to a secondary replica in an Always On availability group. For more information, see
Active Secondaries: Readable Secondary Replicas (Always On Availability Groups).
-l login_timeout
Specifies a login timeout. The -l option specifies the number of seconds before a login
to SQL Server times out when you try to connect to a server. The default login timeout is
15 seconds. The login timeout must be a number between 0 and 65534. If the value
supplied isn't numeric or doesn't fall into that range, bcp generates an error message. A
value of 0 specifies an infinite timeout.
-L last_row
Specifies the number of the last row to export from a table or import from a data file.
This parameter requires a value greater than ( > ) 0 but less than ( < ) or equal to ( = ) the
number of the last row. In the absence of this parameter, the default is the last row of
the file.
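Combining -F and -L selects a contiguous slice of the file. A hedged sketch, reusing the data file from the examples later in this article (path and row range are illustrative):

```shell
# Import only rows 100 through 200 of the character-format data file.
bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp in D:\BCP\StockItemTransactions_character.bcp -c -F 100 -L 200 -T
```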
-m max_errors
Specifies the maximum number of syntax errors that can occur before the bcp operation
is canceled. A syntax error implies a data conversion error to the target data type. The
max_errors total excludes any errors that can be detected only at the server, such as
constraint violations.
A row that can't be copied by the bcp utility is ignored and is counted as one error. If
this option isn't included, the default is 10.
Note
The -m option also does not apply to converting the money or bigint data types.
-n
Performs the bulk-copy operation using the native (database) data types of the data.
This option doesn't prompt for each field; it uses the native values.
For more information, see Use Native Format to Import or Export Data (SQL Server).
-N
Performs the bulk-copy operation using the native (database) data types of the data for
noncharacter data, and Unicode characters for character data. This option offers a
higher performance alternative to the -w option, and is intended for transferring data
from one instance of SQL Server to another using a data file. It doesn't prompt for each
field. Use this option when you are transferring data that contains ANSI extended
characters and you want to take advantage of the performance of native mode.
For more information, see Use Unicode Native Format to Import or Export Data (SQL
Server).
If you export and then import data to the same table schema by using bcp with -N , you
might see a truncation warning if there is a fixed length, non-Unicode character column
(for example, char(10)).
The warning can be ignored. One way to resolve this warning is to use -n instead of -N .
-o output_file
Specifies the name of a file that receives output redirected from the command prompt.
-P password
Specifies the password for the login ID. If this option isn't used, the bcp command
prompts for a password. If this option is used at the end of the command prompt
without a password, bcp uses the default password (NULL).
Important
To mask your password, don't specify the -P option along with the -U option. Instead,
after specifying bcp along with the -U option and other switches (don't specify -P ),
press ENTER, and the command will prompt you for a password. This method ensures
that your password is masked when it is entered.
If password begins with a hyphen ( - ) or a forward slash ( / ), don't add a space between
-P and the password value.
-q
Executes the SET QUOTED_IDENTIFIERS ON statement in the connection between the
bcp utility and an instance of SQL Server. Use this option to specify a database, owner,
table, or view name that contains a space or a single quotation mark. Enclose the entire
three-part table or view name in quotation marks ("").
To specify a database name that contains a space or single quotation mark, you must
use the -q option.
-r row_term
Specifies the row terminator. The default is \n (newline character). Use this parameter
to override the default row terminator. For more information, see Specify Field and Row
Terminators (SQL Server).
If you specify the row terminator in hexadecimal notation in a bcp command, the value
is truncated at 0x00 . For example, if you specify 0x410041 , 0x41 is used.
-R
Specifies that currency, date, and time data is bulk copied into SQL Server using the
regional format defined for the locale setting of the client computer. By default, regional
settings are ignored.
-S server_name [\instance_name]
Specifies the instance of SQL Server to which to connect. If no server is specified, the
bcp utility connects to the default instance of SQL Server on the local computer. This
option is required when a bcp command is run from a remote computer on the network
or a local named instance. To connect to the default instance of SQL Server on a server,
specify only server_name. To connect to a named instance of SQL Server, specify
server_name\instance_name.
-t field_term
Specifies the field terminator. The default is \t (tab character). Use this parameter to
override the default field terminator. For more information, see Specify Field and Row
Terminators (SQL Server).
If you specify the field terminator in hexadecimal notation in a bcp command, the value
is truncated at 0x00 . For example, if you specify 0x410041 , 0x41 is used.
-T
Specifies that the bcp utility connects to SQL Server with a trusted connection using
integrated security. The security credentials of the network user, login_id, and password
aren't required. If -T isn't specified, you need to specify -U and -P to successfully log
in.
Important
When the bcp utility is connecting to SQL Server with a trusted connection using
integrated security, use the -T option (trusted connection) instead of the user
name and password combination. When the bcp utility is connecting to SQL
Database or Azure Synapse Analytics, using Windows authentication or Azure
Active Directory authentication is not supported. Use the -U and -P options.
-U login_id
Specifies the login ID used to connect to SQL Server.
Important
When the bcp utility is connecting to SQL Server with a trusted connection using
integrated security, use the -T option (trusted connection) instead of the user
name and password combination. When the bcp utility is connecting to SQL
Database or Azure Synapse Analytics, using Windows authentication or Azure
Active Directory authentication is not supported. Use the -U and -P options.
-v
Reports the bcp utility version number and copyright.
-V (80 | 90 | 100 | 110 | 120 | 130 | 140 | 150 | 160)
Performs the bulk-copy operation using data types from an earlier version of SQL Server:
80 = SQL Server 2000 (8.x)
90 = SQL Server 2005 (9.x)
100 = SQL Server 2008 (10.0.x) and SQL Server 2008 R2 (10.50.x)
110 = SQL Server 2012 (11.x)
120 = SQL Server 2014 (12.x)
130 = SQL Server 2016 (13.x)
140 = SQL Server 2017 (14.x)
150 = SQL Server 2019 (15.x)
160 = SQL Server 2022 (16.x)
For example, to generate data for types that aren't supported by SQL Server 2000 (8.x)
but were introduced in later versions of SQL Server, use the -V80 option.
For more information, see Import Native and Character Format Data from Earlier
Versions of SQL Server.
-w
Performs the bulk copy operation using Unicode characters. This option doesn't prompt
for each field; it uses nchar as the storage type, no prefixes, \t (tab character) as the field
separator, and \n (newline character) as the row terminator. -w isn't compatible with -c .
For more information, see Use Unicode Character Format to Import or Export Data (SQL
Server).
-x
This option is used with the format and -f format_file options, and generates an XML-
based format file instead of the default non-XML format file. The -x doesn't work when
importing or exporting data. It generates an error if used without both format and -f
format_file.
Remarks
The bcp 13.0 client is installed when you install Microsoft SQL Server 2019 (15.x)
tools. If tools are installed for multiple versions of SQL Server, depending on the
order of values of the PATH environment variable, you might be using the earlier
bcp client instead of the bcp 13.0 client. This environment variable defines the set
of directories used by Windows to search for executable files. To discover which
version you are using, run the bcp -v command at the Windows Command
Prompt. For information about how to set the command path in the PATH
environment variable, see Environment Variables or search for Environment
Variables in Windows Help.
To make sure the newest version of the bcp utility is running, you need to remove
any older versions of the bcp utility.
To determine where all versions of the bcp utility are installed, type in the
command prompt:
where bcp.exe
The bcp utility can also be downloaded separately from the Microsoft SQL Server
2016 Feature Pack . Select either ENU\x64\MsSqlCmdLnUtils.msi or
ENU\x86\MsSqlCmdLnUtils.msi .
XML format files are only supported when SQL Server tools are installed together
with SQL Server Native Client.
For information about where to find or how to run the bcp utility and about the
command prompt utilities syntax conventions, see Command Prompt Utility
Reference (Database Engine).
For information on preparing data for bulk import or export operations, see
Prepare Data for Bulk Export or Import (SQL Server).
For information about when row-insert operations that are performed by bulk
import are logged in the transaction log, see Prerequisites for Minimal Logging in
Bulk Import.
The characters < , > , | , & , and ^ are special command shell characters, and they
must be preceded by the escape character ( ^ ), or enclosed in quotation marks
when used in a string (for example, "StringContaining&Symbol" ). If you use
quotation marks to enclose a string that contains one of the special characters, the
quotation marks are set as part of the environment variable value.
Computed and timestamp columns are bulk copied from SQL Server to a data file as
usual.
When you specify an identifier or file name that includes a space or quotation
mark at the command prompt, enclose the identifier in quotation marks ("").
For example, the following bcp out command creates a data file named Currency
Types.dat :
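The command is not shown above; a sketch consistent with the description follows. The source table is illustrative, and the quoted file name preserves the embedded space:

```shell
bcp AdventureWorks.Sales.Currency out "Currency Types.dat" -c -T
```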
To specify a database name that contains a space or quotation mark, you must use
the -q option.
For owner, table, or view names that contain embedded spaces or quotation
marks, you can either specify the -q option or enclose the owner, table, or view
name in brackets ( [] ) inside the quotation marks.
Data validation
bcp now enforces data validation and data checks that might cause scripts to fail if
they're executed on invalid data in a data file.
Forms of invalid data that could be bulk imported in earlier versions of SQL Server might
fail to load now; whereas, in earlier versions, the failure didn't occur until a client tried to
access the invalid data. The added validation minimizes surprises when querying the
data after bulkload.
SQLCHAR or SQLVARYCHAR: The data is sent in the client code page (or in the code
page implied by the collation). The effect is the same as specifying the -c switch
without specifying a format file.
SQLNCHAR or SQLNVARCHAR: The data is sent as Unicode. The effect is the same as
specifying the -w switch without specifying a format file.
Permissions
A bcp out operation requires SELECT permission on the source table.
Note
Disabling constraints is the default behavior. To enable constraints explicitly,
use the -h option with the CHECK_CONSTRAINTS hint.
Note
By default, triggers are not fired. To fire triggers explicitly, use the -h option
with the FIRE_TRIGGERS hint.
You use the -E option to import identity values from a data file.
Note
Requiring ALTER TABLE permission on the target table was new in SQL Server 2005
(9.x). This new requirement might cause bcp scripts that do not enforce triggers
and constraint checks to fail if the user account lacks ALTER table permissions for
the target table.
(Administrator) Verify data when using bcp out. For example, when you use bcp
out, bcp in, and then bcp out, verify that the data is properly exported and that the
terminator values aren't used as part of some data value. Consider overriding the
default terminators (using the -t and -r options) with random hexadecimal values to
avoid conflicts between terminator values and data values.
(User) Use a long and unique terminator (any sequence of bytes or characters) to
minimize the possibility of a conflict with the actual string value. This can be done
by using the -t and -r options.
Examples
The examples in this section make use of the WideWorldImporters sample database for
SQL Server 2016 (13.x) and later versions, Azure SQL Database, and Azure SQL Managed
Instance. WideWorldImporters can be downloaded from
https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0.
See RESTORE (Transact-SQL) for the syntax to restore the sample database.
Run the following T-SQL script in SQL Server Management Studio (SSMS):
SQL
USE WideWorldImporters;
GO
Note
To determine your bcp version, execute the bcp -v command.
The following example imports data in native format. The example also specifies the
maximum number of syntax errors, an error file, and an output file.
The following example creates a data file named StockItemTransactions_character.bcp
and copies the table data into it using character format.
The example assumes that you use mixed-mode authentication, and you must use the -
U switch to specify your login ID. Also, unless you are connecting to the default instance
of SQL Server on the local computer, use the -S switch to specify the system name and,
optionally, an instance name.
At a command prompt, enter the following command. (The system prompts you for
your password.)
bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp IN
D:\BCP\StockItemTransactions_character.bcp -c -T
bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp IN
D:\BCP\StockItemTransactions_native.bcp -b 5000 -h "TABLOCK" -m 1 -n -e
D:\BCP\Error_in.log -o D:\BCP\Output_in.log -S -T
Note
To use the -x switch, you must be using a bcp 9.0 client. For information about
how to use the bcp 9.0 client, see "Remarks."
For more information, see Non-XML Format Files (SQL Server) and XML Format Files
(SQL Server).
bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp in
D:\BCP\StockItemTransactions_character.bcp -L 100 -f
D:\BCP\StockItemTransactions_c.xml -T
Note
Format files are useful when the data file fields are different from the table
columns; for example, in their number, ordering, or data types. For more
information, see Format Files for Importing or Exporting Data (SQL Server).
1. Create a table dbo.T1 in the tempdb database, with two columns, ID and Name .
SQL
USE tempdb;
GO
CREATE TABLE dbo.T1 (ID INT, Name NVARCHAR(20));
INSERT INTO dbo.T1
VALUES (1, N'Natalia'), (2, N'Mark'), (3, N'Randolph');
GO
2. Generate an output file from the example table dbo.T1 , using a custom field
terminator.
In this example, the server name is MYSERVER , and the custom field terminator is
specified by -t , (a comma).
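The command itself is missing from this extract; a hedged sketch of the shape it would take (the server name MYSERVER and the output path are illustrative) is:

```
bcp tempdb.dbo.T1 out D:\BCP\T1_comma.txt -S MYSERVER -T -c -t ,
```

Here -c selects character format, -T uses a trusted connection, and -t , sets a comma as the field terminator.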
Output
1,Natalia
2,Mark
3,Randolph
3. Generate an output file from the example table dbo.T1 , using a custom field
terminator and custom row terminator.
In this example, the server name is MYSERVER , the custom field terminator is
specified by -t , (a comma), and the custom row terminator is specified by -r : (a
colon).
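Again, the command is missing from this extract; a hedged sketch (server name and path illustrative) that matches the output below is:

```
bcp tempdb.dbo.T1 out D:\BCP\T1_comma_colon.txt -S MYSERVER -T -c -t , -r :
```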
Output
1,Natalia:2,Mark:3,Randolph:
Note
The row terminator is always added, even to the last record. The field
terminator, however, isn't added to the last field.
Additional examples
The following articles contain examples of using bcp:
Keep Nulls or Use Default Values During Bulk Import (SQL Server)
Next steps
Prepare Data for Bulk Export or Import (SQL Server)
BULK INSERT (Transact-SQL)
OPENROWSET (Transact-SQL)
SET QUOTED_IDENTIFIER (Transact-SQL)
sp_configure (Transact-SQL)
sp_tableoption (Transact-SQL)
Format Files for Importing or Exporting Data (SQL Server)
Get help
Ideas for SQL: Have suggestions for improving SQL Server?
Microsoft Q & A (SQL Server)
DBA Stack Exchange (tag sql-server): Ask SQL Server questions
Stack Overflow (tag sql-server): Answers to SQL development questions
Reddit: General discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
Additional SQL Server help and feedback
Applies to:
SQL Server
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
Analytics Platform System (PDW)
The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script
files through various modes:
At the command prompt.
In Query Editor in SQLCMD mode.
In a Windows script file.
In an operating system (Cmd.exe) job step of a SQL Server Agent job.
Note
For SQL Server 2014 (12.x) and previous versions, see sqlcmd utility.
For using sqlcmd on Linux, see Install sqlcmd and bcp on Linux.
Windows
Download Microsoft Command Line Utilities 15 for SQL Server (x64)
Download Microsoft Command Line Utilities 15 for SQL Server (x86)
The command-line tools are generally available (GA); however, they're being released
with the installer package for SQL Server 2019 (15.x).
Version information
Release number: 15.0.4298.1
Build number: 15.0.4298.1
Release date: April 7, 2023
The new version of sqlcmd supports Azure Active Directory (Azure AD) authentication,
including Multi-Factor Authentication (MFA) support for Azure SQL Database, Azure
Synapse Analytics, and Always Encrypted features.
System requirements
Windows 7 through Windows 11
Windows Server 2008 through Windows Server 2022
This component requires both the built-in Windows Installer 5 and the Microsoft ODBC
Driver 17 for SQL Server.
Check version
To check the sqlcmd version, execute the sqlcmd -? command and confirm that
15.0.4298.1, or a later version, is in use.
Note
You need version 13.1 or higher to support Always Encrypted ( -g ) and Azure AD
authentication ( -G ). You may have several versions of sqlcmd installed on your
computer. Be sure you are using the correct version. To determine the version,
execute sqlcmd -? .
Preinstalled
Important
SQL Server Management Studio (SSMS) uses the Microsoft .NET Framework
SqlClient for execution in regular and SQLCMD mode in Query Editor. When
sqlcmd is run from the command-line, sqlcmd uses the ODBC driver. Because
different default options may apply, you might see different behavior when you
execute the same query in SQL Server Management Studio in SQLCMD Mode and
in the sqlcmd utility.
Syntax
Console
sqlcmd
-a packet_size
-c batch_terminator
-d db_name
-D
-e (echo input)
-h rows_per_header
-H workstation_name
-i input_file
-K application_intent
-l login_timeout
-m error_level
-M multisubnet_failover
-N (encrypt connection)
-o output_file
-P password
-q "cmdline query"
-s col_separator
-S [protocol:]server[\instance_name][,port]
-t query_timeout
-U login_id
-v var = "value"
-V error_severity_level
-w screen_width
-y variable_length_type_display_width
-Y fixed_length_type_display_width
-z new_password
-? (usage)
Currently, sqlcmd doesn't require a space between the command-line option and the
value. However, in a future release, a space may be required between the command-line
option and the value.
Command-line options
Login-related options
-A
Signs in to SQL Server with a dedicated administrator connection (DAC). This kind of
connection is used to troubleshoot a server. This connection works only with server
computers that support DAC. If DAC isn't available, sqlcmd generates an error message,
and then exits. For more information about DAC, see Diagnostic Connection for
Database Administrators. The -A option isn't supported with the -G option. When
connecting to Azure SQL Database using -A , you must be an administrator on the
logical SQL server. DAC isn't available for an Azure AD administrator.
-C
This option is used by the client to configure it to implicitly trust the server certificate
without validation. This option is equivalent to the ADO.NET option
TRUSTSERVERCERTIFICATE = true .
-d db_name
Issues a USE <db_name> statement when you start sqlcmd. This option sets the sqlcmd
scripting variable SQLCMDDBNAME . This parameter specifies the initial database. The default
is your login's default-database property. If the database doesn't exist, an error message
is generated and sqlcmd exits.
-D
Interprets the server name provided to -S as a DSN instead of a hostname. For more
information, see DSN support in sqlcmd and bcp in Connecting with sqlcmd.
Note
The -D option is only available on Linux and macOS clients. On Windows clients, it
previously referred to a now-obsolete option which has been removed and is
ignored.
-l login_timeout
Specifies the number of seconds before a sqlcmd login to the ODBC driver times out
when you try to connect to a server. This option sets the sqlcmd scripting variable
SQLCMDLOGINTIMEOUT . The default time-out for login to sqlcmd is 8 seconds. When using
the -G option to connect to Azure SQL Database or Azure Synapse Analytics and
authenticate using Azure AD, a timeout value of at least 30 seconds is recommended.
The login time-out must be a number between 0 and 65534 . If the value supplied isn't
numeric, or doesn't fall into that range, sqlcmd generates an error message. A value of
0 specifies time-out to be infinite.
-E
Uses a trusted connection instead of using a user name and password to sign in to SQL
Server. By default, without -E specified, sqlcmd uses the trusted connection option.
The -E option ignores possible user name and password environment variable settings
such as SQLCMDPASSWORD . If the -E option is used together with the -U option or the -P
option, an error message is generated.
-g
Sets the Column Encryption setting to Enabled . For more information, see Always
Encrypted. Only master keys stored in Windows Certificate Store are supported. The -g
option requires at least sqlcmd version 13.1 . To determine your version, execute
sqlcmd -? .
-G
This option is used by the client when connecting to Azure SQL Database or Azure
Synapse Analytics to specify that the user be authenticated using Azure AD
authentication. This option sets the sqlcmd scripting variable SQLCMDUSEAAD = true . The
-G option requires at least sqlcmd version 13.1 . To determine your version, execute
sqlcmd -? . For more information, see Connecting to SQL Database or Azure Synapse
Analytics By Using Azure Active Directory Authentication. The -A option isn't supported
with the -G option.
The -G option only applies to Azure SQL Database and Azure Synapse Analytics.
When you want to use an Azure AD user name and password, you can provide the
-G option with the user name and password, by using the -U and -P options.
Console
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G -U bob@contoso.com -P
MyAzureADPassword
The previous command generates the following connection string in the backend:
Output
SERVER =
Target_DB_or_DW.testsrv.database.windows.net;UID=bob@contoso.com;PWD=My
AzureADPassword;AUTHENTICATION=ActiveDirectoryPassword;
Console
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G
Output
SERVER =
Target_DB_or_DW.testsrv.database.windows.net;Authentication=ActiveDirec
toryIntegrated;Trusted_Connection=NO;
Note
The Azure AD interactive authentication for Azure SQL Database and Azure
Synapse Analytics, allows you to use an interactive method supporting multi-factor
authentication. For more information, see Active Directory Interactive
Authentication.
The following example exports data using Azure AD interactive mode, indicating a
username where the user represents an Azure AD account. This is the same
example used in the previous section, Azure Active Directory username and
password.
Console
sqlcmd -S testsrv.database.windows.net -d Target_DB_or_DW -G -U
alice@aadtest.onmicrosoft.com
The previous command generates the following connection string in the backend:
Output
SERVER =
Target_DB_or_DW.testsrv.database.windows.net;UID=alice@aadtest.onmicros
oft.com;AUTHENTICATION=ActiveDirectoryInteractive
In case an Azure AD user is a domain federated user using a Windows account, the
user name required in the command-line contains its domain account (for example
joe@contoso.com ):
Console
sqlcmd -S testsrv.database.windows.net -d Target_DB_or_DW -G -U joe@contoso.com
If guest users exist in a specific Azure AD tenant, and are part of a group that exists
in Azure SQL Database that has database permissions to execute the sqlcmd
command, their guest user alias is used (for example, keith0@adventureworks.com ).
Important
There is a known issue when using the -G and -U option with sqlcmd, where
putting the -U option before the -G option may cause authentication to fail.
Always start with the -G option followed by the -U option.
-H workstation_name
A workstation name. This option sets the sqlcmd scripting variable SQLCMDWORKSTATION .
The workstation name is listed in the hostname column of the sys.sysprocesses catalog
view, and can be returned using the stored procedure sp_who . If this option isn't
specified, the default is the current computer name. This name can be used to identify
different sqlcmd sessions.
-j
Prints raw error messages to the screen.
-K application_intent
Declares the application workload type when connecting to a server. The only currently
supported value is ReadOnly . If -K isn't specified, sqlcmd doesn't support connectivity
to a secondary replica in an availability group. For more information, see Active
Secondaries: Readable Secondary Replica (Always On Availability Groups).
-M multisubnet_failover
Always specify -M when connecting to the availability group listener of a SQL Server
availability group or a SQL Server Failover Cluster Instance. -M provides for faster
detection of and connection to the (currently) active server. If -M isn't specified, -M is
off. For more information, see Listeners, Client Connectivity, Application Failover,
Creation and Configuration of Availability Groups (SQL Server), Failover Clustering and
Always On Availability Groups (SQL Server), and Active Secondaries: Readable Secondary
Replicas (Always On Availability Groups).
-N
This option is used by the client to request an encrypted connection.
-P password
Specifies a user-specified password. Passwords are case sensitive. If the -U option is
used and the -P option isn't used, and the SQLCMDPASSWORD environment variable hasn't
been set, sqlcmd prompts the user for a password.
Important
The password prompt is displayed by printing the password prompt to the console, as
follows: Password:
User input is hidden. This means that nothing is displayed and the cursor stays in
position.
The SQLCMDPASSWORD environment variable lets you set a default password for the current
session. Therefore, passwords don't have to be hard-coded into batch files. The
following example first sets the SQLCMDPASSWORD variable at the command prompt and
then accesses the sqlcmd utility.
Console
SET SQLCMDPASSWORD=p@a$$w0rd
Console
sqlcmd
If the user name and password combination is incorrect, an error message is generated.
Note
The OSQLPASSWORD environment variable has been kept for backward compatibility.
The SQLCMDPASSWORD environment variable takes precedence over the OSQLPASSWORD
environment variable. This means that sqlcmd and osql can be used next to each
other without interference. Old scripts will continue to work.
If the -P option is followed by more than one argument, an error message is generated
and the program exits.
-S [protocol:]server[\instance_name][,port]
Specifies the instance of SQL Server to which to connect. It sets the sqlcmd scripting
variable SQLCMDSERVER .
Specify server_name to connect to the default instance of SQL Server on that server
computer. Specify server_name[\instance_name] to connect to a named instance of SQL
Server on that server computer. If no server computer is specified, sqlcmd connects to
the default instance of SQL Server on the local computer. This option is required when
you execute sqlcmd from a remote computer on the network.
If you don't specify a server_name[\instance_name] when you start sqlcmd, SQL Server
checks for and uses the SQLCMDSERVER environment variable.
Note
The OSQLSERVER environment variable has been kept for backward compatibility.
The SQLCMDSERVER environment variable takes precedence over the OSQLSERVER
environment variable. This means that sqlcmd and osql can be used next to each
other without interference. Old scripts will continue to work.
-U login_id
The login name or contained database user name. For contained database users, you
must provide the database name option ( -d ).
Note
The OSQLUSER environment variable has been kept for backward compatibility. The
SQLCMDUSER environment variable takes precedence over the OSQLUSER environment
variable. This means that sqlcmd and osql can be used next to each other without
interference. Old scripts will continue to work.
If you don't specify either the -U option or the -P option, sqlcmd tries to connect by
using Windows Authentication mode. Authentication is based on the Windows account
of the user who is running sqlcmd.
If the -U option is used with the -E option (described later in this article), an error
message is generated. If the -U option is followed by more than one argument, an error
message is generated and the program exits.
-z new_password
Change password:
Console
sqlcmd -U someuser -P s0mep@ssword -z a_new_p@a$$w0rd
-Z new_password
Change password and exit:
Console
sqlcmd -U someuser -P s0mep@ssword -Z a_new_p@a$$w0rd
Input/output options
-f codepage | i:codepage[,o:codepage] | o:codepage[,i:codepage]
Specifies the input and output code pages. The codepage number is a numeric value
that specifies an installed Windows code page.
If no code pages are specified, sqlcmd uses the current code page for both input
and output files, unless the input file is a Unicode file, in which case no conversion
is required.
If no output file is specified, the output code page is the console code page. This
approach enables the output to be displayed correctly on the console.
Multiple input files are assumed to be of the same code page. Unicode and non-
Unicode input files can be mixed.
Enter chcp at the command prompt to verify the code page of cmd.exe .
-i input_file[,input_file2...]
Identifies the file that contains a batch of Transact-SQL statements or stored procedures.
Multiple files may be specified that are read and processed in order. Don't use any
spaces between file names. sqlcmd checks first to see whether all the specified files
exist. If one or more files don't exist, sqlcmd exits. The -i and the -Q / -q options are
mutually exclusive.
Path examples:
Console
-i C:\<filename>
-i \\<Server>\<Share$>\<filename>
-i "C:\Some Folder\<file name>"
-o output_file
Identifies the file that receives output from sqlcmd.
If -u is specified, the output_file is stored in Unicode format. If the file name isn't valid,
an error message is generated, and sqlcmd exits. sqlcmd doesn't support concurrent
writing of multiple sqlcmd processes to the same file. The file output will be corrupted
or incorrect. The -f option is also relevant to file formats. This file is created if it doesn't
exist. A file of the same name from a prior sqlcmd session is overwritten. The file
specified here isn't the stdout file. If a stdout file is specified, this file isn't used.
Path examples:
Console
-o C:\<filename>
-o \\<Server>\<Share$>\<filename>
-o "C:\Some Folder\<file name>"
-r[0 | 1]
Redirects the error message output to the screen ( stderr ). If you don't specify a
parameter or if you specify 0 , only error messages that have a severity level of 11 or
higher are redirected. If you specify 1 , all error message output including PRINT is
redirected. This option has no effect if you use -o . By default, messages are sent to
stdout .
-R
Causes sqlcmd to localize numeric, currency, date, and time columns retrieved from SQL
Server based on the client's locale. By default, these columns are displayed using the
server's regional settings.
-u
Specifies that output_file is stored in Unicode format, regardless of the format of
input_file.
-e
Writes input scripts to the standard output device ( stdout ).
-I
Sets the SET QUOTED_IDENTIFIER connection option to ON . By default, it's set to OFF . For
more information, see SET QUOTED_IDENTIFIER (Transact-SQL).
-q "cmdline query"
Executes a query when sqlcmd starts, but doesn't exit sqlcmd when the query has
finished running. Multiple-semicolon-delimited queries can be executed. Use quotation
marks around the query, as shown in the following example.
Console
sqlcmd -q "SELECT * FROM AdventureWorks2022.Person.Person"
Important
Don't use the GO terminator in the query.
-Q "cmdline query"
Executes a query when sqlcmd starts and then immediately exits sqlcmd. Multiple-
semicolon-delimited queries can be executed.
Use quotation marks around the query, as shown in the following example.
Console
sqlcmd -Q "SELECT * FROM AdventureWorks2022.Person.Person"
Important
Don't use the GO terminator in the query.
-t query_timeout
Specifies the number of seconds before a command (or Transact-SQL statement) times
out. This option sets the sqlcmd scripting variable SQLCMDSTATTIMEOUT . If a query_timeout
value isn't specified, the command doesn't time out. The query_timeout must be a
number between 1 and 65534 . If the value supplied isn't numeric or doesn't fall into
that range, sqlcmd generates an error message.
Note
The actual time out value may vary from the specified query_timeout value by
several seconds.
-x
Causes sqlcmd to ignore scripting variables. This parameter is useful when a script
contains many INSERT statements that may contain strings that have the same format as
regular variables, such as $(<variable_name>) .
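For example, a literal string such as the following would normally be expanded as a scripting variable reference; with -x , sqlcmd passes it through unchanged (the table and value here are hypothetical, for illustration only):

```
INSERT INTO dbo.PriceTags (Tag) VALUES ('$(PriceTag)');
```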
Format options
-h headers
Specifies the number of rows to print between the column headings. The default is to
print headings one time for each set of query results. This option sets the sqlcmd
scripting variable SQLCMDHEADERS . Use -1 to specify that headers not be printed. Any
value that isn't valid causes sqlcmd to generate an error message and then exit.
-k [1 | 2]
Removes all control characters, such as tabs and new line characters from the output.
This parameter preserves column formatting when data is returned. If 1 is specified, the
control characters are replaced by a single space. If 2 is specified, consecutive control
characters are replaced by a single space. -k is the same as -k1 .
-s col_separator
Specifies the column-separator character. The default is a blank space. This option sets
the sqlcmd scripting variable SQLCMDCOLSEP . To use characters that have special meaning
to the operating system, such as the ampersand ( & ) or semicolon ( ; ), enclose the
character in quotation marks ( " ). The column separator can be any 8-bit character.
-w screen_width
Specifies the screen width for output. This option sets the sqlcmd scripting variable
SQLCMDCOLWIDTH . The column width must be a number greater than 8 and less than
65536 . If the specified column width doesn't fall into that range, sqlcmd generates an
error message. The default width is 80 characters. When an output line exceeds the
specified column width, it wraps on to the next line.
-W
This option removes trailing spaces from a column. Use this option together with the -s
option when preparing data that is to be exported to another application. Can't be used
with the -y or -Y options.
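As a sketch of preparing delimited export data with -s and -W together (the server name, database, and query are illustrative):

```
sqlcmd -S MYSERVER -d AdventureWorks2022 -Q "SELECT TOP (3) FirstName FROM Person.Person" -s "|" -W
```

The -s "|" option emits pipe-delimited columns, and -W trims the trailing padding so another application can parse the output cleanly.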
-y variable_length_type_display_width
Sets the sqlcmd scripting variable SQLCMDMAXVARTYPEWIDTH . The default is 256 . It limits
the number of characters that are returned for the large variable length data types:
varchar(max)
nvarchar(max)
varbinary(max)
xml
user-defined data types (UDTs)
text
ntext
image
UDTs can be of fixed length depending on the implementation. If the length of a fixed-
length UDT is shorter than display_width, the value of the UDT returned isn't affected.
However, if the length is longer than display_width, the output is truncated.
Caution
Use the -y 0 option with extreme caution, because it may cause significant
performance issues on both the server and the network, depending on the size of
data returned.
-Y fixed_length_type_display_width
Sets the sqlcmd scripting variable SQLCMDMAXFIXEDTYPEWIDTH . The default is 0 (unlimited).
Limits the number of characters that are returned for the following data types:
-b
Specifies that sqlcmd exits and returns a DOS ERRORLEVEL value when an error occurs.
The value that is returned to the ERRORLEVEL variable is 1 when the SQL Server error
message has a severity level greater than 10; otherwise, the value returned is 0 . If the
-V option has been set in addition to -b , sqlcmd won't report an error if the severity
level is lower than the values set using -V . Command prompt batch files can test the
value of ERRORLEVEL and handle the error appropriately. sqlcmd doesn't report errors for
severity level 10 (informational messages).
If the sqlcmd script contains an incorrect comment, syntax error, or is missing a scripting
variable, the ERRORLEVEL returned is 1 .
-m error_level
Controls which error messages are sent to stdout . Messages that have a severity level
greater than or equal to this level are sent. When this value is set to -1 , all messages
including informational messages, are sent. Spaces aren't allowed between the -m and
-1 . For example, -m-1 is valid, and -m -1 isn't.
This option also sets the sqlcmd scripting variable SQLCMDERRORLEVEL . This variable has a
default of 0 .
-V error_severity_level
Controls the severity level that is used to set the ERRORLEVEL variable. Error messages
that have severity levels greater than or equal to this value set ERRORLEVEL . Values that
are less than 0 are reported as 0 . Batch and CMD files can be used to test the value of
the ERRORLEVEL variable.
Miscellaneous options
-a packet_size
Requests a packet of a different size. This option sets the sqlcmd scripting variable
SQLCMDPACKETSIZE . packet_size must be a value between 512 and 32767 . The default is
4096 . A larger packet size can enhance performance for execution of scripts that have
lots of Transact-SQL statements between GO commands. You can request a larger packet
size. However, if the request is denied, sqlcmd uses the server default for packet size.
-c batch_terminator
Specifies the batch terminator. By default, commands are terminated and sent to SQL
Server by typing the word GO on a line by itself. When you reset the batch terminator,
don't use Transact-SQL reserved keywords or characters that have special meaning to
the operating system, even if they're preceded by a backslash.
-L[c]
Lists the locally configured server computers, and the names of the server computers
that are broadcasting on the network. This parameter can't be used in combination with
other parameters. The maximum number of server computers that can be listed is 3000.
If the server list is truncated because of the size of the buffer, a warning message is
displayed.
Note
If the optional parameter c is specified, the output appears without the Servers:
header line, and each server line is listed without leading spaces. This presentation is
referred to as clean output. Clean output improves the processing performance of
scripting languages.
-p[1]
Prints performance statistics for every result set. The following display is an example of
the format for performance statistics:
Output
Network packet size (bytes): n
x xact[s]:
Clock Time (ms.): total       t1 avg       t2 (t3 xacts per sec.)
Where:
x = Number of transactions that are processed by SQL Server.
t1 = Total time for all the transactions.
t2 = Average time for a single transaction.
t3 = Number of transactions per second.
If the optional parameter 1 is specified, the output format of the statistics is in colon-
separated format that can be imported easily into a spreadsheet or processed by a
script.
If the optional parameter is any value other than 1 , an error is generated and sqlcmd
exits.
-X[1]
Disables commands that might compromise system security when sqlcmd is executed
from a batch file. The disabled commands are still recognized; sqlcmd issues a warning
message and continues. If the optional parameter 1 is specified, sqlcmd generates an
error message and then exits. The following commands are disabled when the -X
option is used:
ED
!! command
-?
Displays the version of sqlcmd and a syntax summary of sqlcmd options.
Remarks
Options don't have to be used in the order shown in the syntax section.
When multiple results are returned, sqlcmd prints a blank line between each result set in
a batch. In addition, the <x> rows affected message doesn't appear when it doesn't
apply to the statement executed.
To use sqlcmd interactively, type sqlcmd at the command prompt with any one or more
of the options described earlier in this article. For more information, see Use the sqlcmd
Utility
Note
The total length of the sqlcmd command-line in the command environment (for
example cmd.exe or bash ), including all arguments and expanded variables, is
determined by the underlying operating system.
Note
To view the environmental variables, in Control Panel, open System, and then select
the Advanced tab.
Variable            Related option   R/W   Default
SQLCMDUSER          -U               R     ""
SQLCMDPASSWORD      -P               --    ""
SQLCMDSERVER        -S               R     "DefaultLocalInstance"
SQLCMDWORKSTATION   -H               R     "ComputerName"
SQLCMDDBNAME        -d               R     ""
SQLCMDPACKETSIZE    -a               R     "4096"
SQLCMDERRORLEVEL    -m               R/W   0
SQLCMDINI                            R     ""
R/W indicates that the value can be modified by using the :setvar command and
subsequent commands are influenced by the new value.
sqlcmd commands
In addition to Transact-SQL statements within sqlcmd, the following commands are also
available:
GO [ count ]
:List
[:]RESET
:Error
[:]ED
:Out
[:]!!
:Perftrace
[:]QUIT
:Connect
[:]EXIT
:On Error
:r
:Help
:ServerList
:XML [ ON | OFF ]
:Setvar
:Listvar
Important
sqlcmd commands are recognized only if they appear at the start of a line.
Commands are executed immediately. They aren't put in the execution buffer as
Transact-SQL statements are.
Editing commands
[:]ED
Starts the text editor. This editor can be used to edit the current Transact-SQL batch, or
the last executed batch. To edit the last executed batch, the ED command must be typed
immediately after the last batch has completed execution.
The text editor is defined by the SQLCMDEDITOR environment variable. The default editor
is 'Edit'. To change the editor, set the SQLCMDEDITOR environment variable. For example,
to set the editor to Microsoft Notepad, at the command prompt, type:
SET SQLCMDEDITOR=notepad
[:]RESET
Clears the statement cache.
:List
Prints the content of the statement cache.
Variables
:Setvar <var> [ "value" ]
Defines sqlcmd scripting variables. Scripting variables have the following format:
$(VARNAME) .
Variables can be defined in the following ways:
Implicitly, by using a command-line option. For example, the -l option sets the
SQLCMDLOGINTIMEOUT sqlcmd variable.
Explicitly, by using the :Setvar command.
By defining an environment variable before you run sqlcmd.
Note
If a variable defined by using :Setvar and an environment variable have the same
name, the variable defined by using :Setvar takes precedence.
Variable names can't have the same form as a variable expression, such as $(var) .
If the string value of the scripting variable contains blank spaces, enclose the value in
quotation marks. If a value for a scripting variable isn't specified, the scripting variable is
dropped.
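A minimal sketch of defining and then referencing a scripting variable inside a sqlcmd session (the variable name and database are illustrative):

```
:Setvar MyDatabase AdventureWorks2022
USE $(MyDatabase);
GO
```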
:Listvar
Displays a list of the scripting variables that are currently set.
Note
Only scripting variables that are set by sqlcmd, and those that are set using the
:Setvar command will be displayed.
Output commands
:Error <filename> | STDERR | STDOUT
Redirects all error output to the file specified by filename, to stderr , or to stdout .
The :Error command can appear multiple times in a script. By default, error output is
sent to stderr .
filename
Creates and opens a file that receives the output. If the file already exists, it is
truncated to zero bytes. If the file isn't available because of permissions or other
reasons, the output won't be switched and is sent to the last specified or default
destination.
STDERR
Switches error output to the stderr stream. If this has been redirected, the target
to which the stream has been redirected receives the error output.
STDOUT
Switches error output to the stdout stream. If this has been redirected, the target
to which the stream has been redirected receives the error output.
:Perftrace <filename> | STDERR | STDOUT
Creates and redirects all performance trace information to the file specified by filename,
to stderr , or to stdout . By default, performance trace output is sent to stdout . If the
file already exists, it is truncated to zero bytes. The :Perftrace command can appear
multiple times in a script.
Execution control commands
:On Error [ exit | ignore ]
Sets the action to be performed when an error occurs during script or batch execution.
When the exit option is used, sqlcmd exits with the appropriate error value.
When the ignore option is used, sqlcmd ignores the error and continues executing the
batch or script. By default, an error message is printed.
[:]QUIT
Causes sqlcmd to exit.
[:]EXIT [ ( statement ) ]
Lets you use the result of a SELECT statement as the return value from sqlcmd. If
numeric, the first column of the last result row is converted to a 4-byte integer (long).
MS-DOS, Linux, and macOS pass the low byte to the parent process or operating system
error level. Windows 2000 and later versions pass the whole 4-byte integer. The syntax
is :EXIT(query) .
For example:
text
:EXIT(SELECT @@ROWCOUNT)
You can also include the :EXIT parameter as part of a batch file. For example, at the
command prompt, type:
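The command itself is missing from this extract; a hedged sketch (the query is illustrative) of invoking :EXIT from the command line is:

```
sqlcmd -d AdventureWorks2022 -Q ":EXIT(SELECT COUNT(*) FROM Person.Person)"
```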
The sqlcmd utility sends everything between the parentheses ( () ) to the server. If a
system stored procedure selects a set and returns a value, only the selection is returned.
The :EXIT() statement with nothing between the parentheses executes everything
before it in the batch, and then exits without a return value.
:EXIT
Doesn't execute the batch, and then quits immediately and returns no value.
:EXIT( )
Executes the batch, and then quits and returns no value.
:EXIT(query)
Executes the batch that includes the query, and then quits after it returns the
results of the query.
If RAISERROR is used within a sqlcmd script, and a state of 127 is raised, sqlcmd will quit
and return the message ID back to the client. For example:
text
RAISERROR(50001, 10, 127)
This error causes the sqlcmd script to end and return the message ID 50001 to the
client.
The return values -1 to -99 are reserved by SQL Server, and sqlcmd defines the
following additional return values:
-100 Error encountered prior to selecting return value
-101 No rows found when selecting return value
-102 Conversion error occurred when selecting return value
GO [count]
GO signals both the end of a batch and the execution of any cached Transact-SQL
statements. The batch is executed multiple times as separate batches. You can't declare
a variable more than once in a single batch.
Miscellaneous commands
:r <filename>
Parses additional Transact-SQL statements and sqlcmd commands from the file
specified by filename into the statement cache. filename is read relative to the startup
directory in which sqlcmd was run.
If the file contains Transact-SQL statements that aren't followed by GO , you must enter
GO on the line that follows :r .
The file will be read and executed after a batch terminator is encountered. You can issue
multiple :r commands. The file may include any sqlcmd command. This includes the
batch terminator GO .
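A short sketch (the file path is illustrative): the referenced script is parsed into the statement cache, and GO then executes it together with anything already cached:

```
:r C:\Scripts\CreateTables.sql
GO
```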
Note
The line count that is displayed in interactive mode will be increased by one for
every :r command encountered. The :r command will appear in the output of the
list command.
:ServerList
Lists the locally configured servers and the names of the servers broadcasting on the
network.
:Connect server_name[\instance_name] [-l timeout] [-U user_name [-P password]]
Connects to an instance of SQL Server and closes the current connection.
Time-out options:
Value Behavior
0 Wait forever
If timeout isn't specified, the value of the SQLCMDLOGINTIMEOUT variable is the default.
text
:connect myserver\instance1
To connect to the default instance of myserver using scripting variables, you would use
the following:
text
:setvar myservername myserver
:connect $(myservername)
[:]!! command
Executes operating system commands. To execute an operating system command, start
a line with two exclamation marks ( !! ) followed by the operating system command. For
example:
text
:!! dir
Note
:XML [ ON | OFF ]
For more information, see XML Output Format and JSON Output Format in this article.
:Help
Lists sqlcmd commands, together with a short description of each command.
:Error, :Out, and :Perftrace should use separate filename values. If the same filename is used, inputs from the commands may be intermixed.
If an input file that is located on a remote server is called from sqlcmd on a local
computer, and the file contains a drive file path such as :Out c:\OutputFile.txt ,
the output file is created on the local computer and not on the remote server.
Each new sqlcmd session overwrites existing files that have the same names.
Informational messages
sqlcmd prints any informational message that is sent by the server. In the following
example, after the Transact-SQL statements are executed, an informational message is
printed.
Console
sqlcmd
Console
USE AdventureWorks2022;
GO
When you press ENTER , the following informational message is printed: "Changed
database context to 'AdventureWorks2022'."
This line is followed by a separator line that is a series of dash characters. The following
output shows an example.
Console
USE AdventureWorks2022;
SELECT TOP (2) BusinessEntityID
FROM Person.Person;
GO
Output
(2 row(s) affected)
Although the BusinessEntityID column is only four characters wide, it has been
expanded to accommodate the longer column name. By default, output is terminated at
80 characters. This can be changed by using the -w option, or by setting the
SQLCMDCOLWIDTH scripting variable.
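For example, a wider output line can be requested at startup (the server name is a placeholder):

```cmd
sqlcmd -S myserver -w 120 -Q "SELECT BusinessEntityID, FirstName, LastName FROM Person.Person;"
```

Setting the SQLCMDCOLWIDTH scripting variable has the same effect as -w.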
When you expect XML output, use the following command: :XML ON .
Note
sqlcmd returns error messages in the usual format. The error messages are also
output in the XML text stream in XML format. By using :XML ON , sqlcmd does not
display informational messages.
To set the XML mode to off, use the following command: :XML OFF .
The GO command shouldn't appear before the :XML OFF command is issued, because
the :XML OFF command switches sqlcmd back to row-oriented output.
XML (streamed) data and rowset data can't be mixed. If the :XML ON command hasn't
been issued before a Transact-SQL statement that outputs XML streams is executed, the
output is garbled. Once the :XML ON command has been issued, you can't execute
Transact-SQL statements that output regular row sets.
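As a sketch against the AdventureWorks sample database used in the earlier examples:

```sql
:XML ON
SELECT TOP (1) FirstName, LastName
FROM Person.Person
FOR XML AUTO;
GO
```

The FOR XML result is written to the output as a single XML stream rather than as a rowset; row-oriented output resumes only after :XML OFF is issued.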
Note
The :XML command does not support the SET STATISTICS XML statement.
Console
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G -l 30
Set time-out values for batch or query execution higher than you expect it will take
to execute the batch or query.
Use -V16 to log any severity 16 level messages. Severity 16 messages indicate
general errors that can be corrected by the user.
Check the exit code and DOS ERRORLEVEL variable after the process has exited. sqlcmd returns 0 normally; otherwise it sets the ERRORLEVEL as configured by -V. In other words, ERRORLEVEL shouldn't be expected to be the same value as the error number reported from SQL Server. The error number is a SQL Server-specific value corresponding to the system function @@ERROR. ERRORLEVEL is a sqlcmd-specific value that indicates why sqlcmd terminated, and its value is influenced by specifying the -b command-line argument.
Using -V16 in combination with checking the exit code and DOS ERRORLEVEL can help
catch errors in automated environments, particularly quality gates before a production
release.
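In a Windows batch file, the pattern can be sketched like this (the server name and script file are placeholders):

```cmd
sqlcmd -S myserver -i deploy.sql -b -V16
IF %ERRORLEVEL% NEQ 0 (
    ECHO sqlcmd failed with ERRORLEVEL %ERRORLEVEL%
    EXIT /B 1
)
```

The -b option ends the batch when an error occurs, and -V16 sets severity 16 as the lowest level that is reported through ERRORLEVEL.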
Next steps
Start the sqlcmd Utility
Run Transact-SQL Script Files Using sqlcmd
Use the sqlcmd Utility
Use sqlcmd with Scripting Variables
Connect to the Database Engine With sqlcmd
Edit SQLCMD Scripts with Query Editor
Manage Job Steps
Create a CmdExec Job Step
SqlPackage
Article • 05/11/2023
SqlPackage is a command-line utility that automates the following database development tasks by
exposing some of the public Data-Tier Application Framework (DacFx) APIs:
Version: Returns the build number of the SqlPackage application. Added in version 18.6.
Extract: Creates a data-tier application (.dacpac) file containing the schema or schema and user
data from a connected SQL database.
Publish: Incrementally updates a database schema to match the schema of a source .dacpac file.
If the database does not exist on the server, the publish operation creates it. Otherwise, an
existing database is updated.
Export: Exports a connected SQL database - including database schema and user data - to a
BACPAC file (.bacpac).
Import: Imports the schema and table data from a BACPAC file into a new user database.
DeployReport: Creates an XML report of the changes that would be made by a publish action.
DriftReport: Creates an XML report of the changes that have been made to a registered
database since it was last registered.
Script: Creates a Transact-SQL incremental update script that updates the schema of a target to
match the schema of a source.
The SqlPackage command line tool allows you to specify these actions along with action-specific
parameters and properties.
Download the latest version. For details about the latest release, see the release notes.
Command-Line Syntax
SqlPackage initiates the actions specified using the parameters, properties, and SQLCMD variables
specified on the command line.
Bash
Exit codes
SqlPackage commands return the following exit codes:
0 = success
non-zero = failure
Usage example
Further examples are available on the individual action pages.
SqlPackage /TargetFile:"C:\sqlpackageoutput\output_current_version.dacpac"
/Action:Extract /SourceServerName:"." /SourceDatabaseName:"Contoso.Database"
Parameters
Some parameters are shared between the SqlPackage actions. The following table summarizes them; for more information, see the specific action pages. An x indicates each action (Extract, Publish, Export, Import, DeployReport, DriftReport, Script) that supports the parameter.
Parameter Short Form Extract Publish Export Import DeployReport DriftReport Script
/AccessToken: /at x x x x x x x
/ClientId: /cid x
/DeployScriptPath: /dsp x x
/DeployReportPath: /drp x x
/Diagnostics: /d x x x x x x x
/DiagnosticsFile: /df x x x x x x x
/MaxParallelism: /mp x x x x x x x
/OutputPath: /op x x x
/OverwriteFiles: /of x x x x x x
/Profile: /pr x x x
/Properties: /p x x x x x x
/Quiet: /q x x x x x x x
/Secret: /secr x
/SourceConnectionString: /scs x x x x x
/SourceDatabaseName: /sdn x x x x x
/SourceEncryptConnection: /sec x x x x x
/SourceFile: /sf x x x x
/SourcePassword: /sp x x x x x
/SourceServerName: /ssn x x x x x
/SourceTimeout: /st x x x x x
/SourceTrustServerCertificate: /stsc x x x x x
/SourceUser: /su x x x x x
/TargetConnectionString: /tcs x x x x
/TargetDatabaseName: /tdn x x x x x
/TargetEncryptConnection: /tec x x x x x
/TargetFile: /tf x x x x
/TargetPassword: /tp x x x x x
/TargetServerName: /tsn x x x x x
/TargetTimeout: /tt x x x x x
/TargetTrustServerCertificate: /ttsc x x x x x
/TargetUser: /tu x x x x x
/TenantId: /tid x x x x x x x
/UniversalAuthentication: /ua x x x x x x x
/Variables: /v x x
Properties
SqlPackage actions support a large number of properties that modify the default behavior of an action. For more information, see the specific action pages.
Utility commands
Version
Displays the sqlpackage version as a build number. Can be used in interactive prompts as well as in
automated pipelines.
SqlPackage /Version
Help
You can display SqlPackage usage information by using /? or /help:True .
Windows Command Prompt
SqlPackage /?
For parameter and property information specific to a particular action, use the help parameter in
addition to that action's parameter.
SqlPackage /Action:Publish /?
Authentication
SqlPackage authenticates using methods available in SqlClient. Configuring the authentication type
can be accomplished via the connection string parameters for each SqlPackage action
( /SourceConnectionString and /TargetConnectionString ) or through individual parameters for
connection properties. The following authentication methods are supported in a connection string: SQL Server authentication, Active Directory password, Active Directory integrated, Active Directory interactive, Active Directory service principal, and Active Directory managed identity.
Managed identity
In automated environments Azure Active Directory Managed identity is the recommended
authentication method. This method does not require passing credentials to SqlPackage at runtime.
The managed identity is configured for the environment where the SqlPackage action is run and the
SqlPackage action will use that identity to authenticate to Azure SQL. For more information on
configuring Managed identity for your environment, please see the Managed identity documentation.
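As an illustrative sketch (the server, database, and file names are placeholders), the Authentication keyword of the SqlClient connection string selects managed identity:

```bash
SqlPackage /Action:Publish /SourceFile:"db.dacpac" /TargetConnectionString:"Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=yourdb;Authentication=Active Directory Managed Identity;Encrypt=True;"
```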
Environment variables
Connection pooling
Connection pooling can be enabled for all connections made by SqlPackage by setting the
CONNECTION_POOLING_ENABLED environment variable to True . This setting is recommended for
operations with Azure Active Directory username/password connections to avoid MSAL throttling.
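For example, in a POSIX shell the variable can be set for the current session before invoking SqlPackage (the SqlPackage invocation is shown as a comment because it requires a reachable database):

```shell
# Enable SqlClient connection pooling for all connections SqlPackage opens in this session
export CONNECTION_POOLING_ENABLED=True

# e.g. SqlPackage /Action:Export /SourceConnectionString:"..." /TargetFile:db.bacpac
echo "$CONNECTION_POOLING_ENABLED"
```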
Temporary files
During SqlPackage operations the table data is written to temporary files before compression or after
decompression. For large databases these temporary files can take up a significant amount of disk
space but their location can be specified. The export and extract operations include an optional
property to specify /p:TempDirectoryForTableData to override the SqlPackage's default value.
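For instance (the paths and connection details are placeholders), an export can direct its temporary table data to a larger drive:

```bash
SqlPackage /Action:Export /SourceServerName:"." /SourceDatabaseName:"Contoso.Database" /TargetFile:"C:\backups\contoso.bacpac" /p:TempDirectoryForTableData="D:\SqlPackageTemp"
```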
For Windows, the following environment variables are checked in order, and the first path that exists is used: TMP, TEMP, USERPROFILE.
For Linux and macOS, if the path is not specified in the TMPDIR environment variable, the default
path /tmp/ is used.
SqlPackage may collect standard computer, use, and performance information that may be
transmitted to Microsoft and analyzed to improve the quality, security, and reliability of SqlPackage.
SqlPackage doesn't collect user specific or personal information. To help approximate a single user for
diagnostic purposes, SqlPackage will generate a random GUID for each computer it runs on and use
that value for all events it sends.
For details, see the Microsoft Privacy Statement , and SQL Server Privacy supplement.
Disable telemetry reporting
To disable telemetry collection and reporting, update the environment variable
DACFX_TELEMETRY_OPTOUT to true or 1 .
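In a POSIX shell, for example:

```shell
# Opt out of DacFx/SqlPackage telemetry collection for this session
export DACFX_TELEMETRY_OPTOUT=1
echo "$DACFX_TELEMETRY_OPTOUT"
```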
Support
The DacFx library and the SqlPackage CLI tool have adopted the Microsoft Modern Lifecycle Policy .
All security updates, fixes, and new features will be released only in the latest point version of the
major version. Maintaining your DacFx or SqlPackage installations to the current version helps ensure
that you will receive all applicable bug fixes in a timely manner.
Next steps
Learn more about SqlPackage Extract
Learn more about SqlPackage Publish
Learn more about SqlPackage Export
Learn more about SqlPackage Import
Connection modules for Microsoft SQL
Database
Article • 07/19/2023
This article provides download links to connection modules or drivers that your client
programs can use for interacting with Microsoft SQL Server, and with its twin in the
cloud Azure SQL Database. Drivers are available for a variety of programming
languages, running on the following operating systems:
Linux
macOS
Windows
OOP-to-relational mismatch: Object-oriented programming (OOP) languages process data as objects, a format that differs from the tabular rowsets returned by SQL. Connection drivers resolve the mismatch in one of two ways:
Raw: Some drivers return queried data to your program in its relational, tabular format. Your program must then translate between the rowset and the objects it works with.
ORM: Other drivers or frameworks return queried data in the OOP format, avoiding the mismatch. These drivers work by expecting that classes have been defined to match the data columns of particular SQL tables. The driver then performs the object-relational mapping (ORM) to return queried data as an instance of a class. Microsoft's Entity Framework (EF) for C#, and Hibernate for Java, are two examples.
The present article devotes separate sections to these two kinds of connection drivers.
Language Download the SQL driver
C# ADO.NET: Microsoft.Data.SqlClient; .NET Core for Linux-Ubuntu, macOS, Windows; Entity Framework Core; Entity Framework
C++ ODBC; OLE DB
Java JDBC
PHP PHP
Go GORM
Python Django; SQL Server backend for Django
Build-an-app webpages
https://aka.ms/sqldev takes you to a set of Build-an-app webpages. The webpages
provide information about numerous combinations of programming language,
operating system, and SQL connection driver. Among the information provided by the
Build-an-app webpages are the following items:
Details about how to get started from the very beginning, for each combination of
language + operating system + driver.
Instructions for installing the latest SQL connection drivers.
Code examples for each of the following items:
Object-relational code examples.
ORM code examples.
Columnstore index demonstrations for much faster performance.
Related links
Code examples for connecting to Azure SQL Database in the cloud, with Java and
other languages.
Microsoft ADO.NET for SQL Server and
Azure SQL Database
Article • 03/20/2023
Download ADO.NET
ADO.NET is the core data access technology for .NET languages. Use the
Microsoft.Data.SqlClient library or Entity Framework to access SQL Server, or providers
from other suppliers to access their stores. Use System.Data.Odbc or System.Data.OleDb
to access data from .NET languages using other data access technologies. Use
System.Data.DataSet when you need an offline data cache in client applications. It also
provides local persistence and XML capabilities that can be useful in web services.
Documentation
ADO.NET Overview
Getting started with the SqlClient driver
Overview of the SqlClient driver
Data type mappings in ADO.NET
Retrieving and modifying data in ADO.NET
SQL Server and ADO.NET
Community
ADO.NET Managed Providers Forum
ADO.NET DataSet Forum
More samples
ADO.NET Code Examples
Getting Started with .NET Framework on Windows
Getting Started with .NET Core on macOS
Getting Started with .NET Core on Ubuntu
Getting Started with .NET Core on Red Hat Enterprise Linux (RHEL)
Microsoft JDBC Driver for SQL Server
Article • 03/03/2023
Download JDBC driver
The Microsoft JDBC Driver for SQL Server has been tested against major application
servers such as IBM WebSphere and SAP NetWeaver.
Getting started
Step 1: Configure development environment for Java development
Step 2: Create a SQL database for Java development
Step 3: Proof of concept connecting to SQL using Java
Documentation
Getting Started
Overview
Programming Guide
Security
Performance and Reliability
Troubleshooting
Code Samples
Compliance and Legal
Community
Feedback and finding additional JDBC driver information
Download
Download Microsoft JDBC Driver for SQL Server - has additional information about
Maven projects, and more.
Samples
Sample JDBC driver applications
Getting started with Java on Windows
Getting started with Java on macOS
Getting started with Java on Ubuntu
Getting started with Java on Red Hat Enterprise Linux (RHEL)
Getting started with Java on SUSE Linux Enterprise Server (SLES)
Node.js Driver for SQL Server
Article • 11/18/2022
Download Node.js SQL driver
You can connect to a SQL Database using Node.js on Windows, Linux, or macOS.
Get started
Step 1: Configure development environment for Node.js development
Step 2: Create a SQL database for Node.js development
Step 3: Proof of concept connecting to SQL using Node.js
Documentation
Tedious module documentation on GitHub
Support
Tedious for Node.js is community-supported software. Microsoft contributes to the tedious open-source community and is an active participant in the repository. To get help, file an issue in the tedious GitHub repository or visit other Node.js community resources.
Community resources
Azure Node.js Developer Center
Get Involved at nodejs.org
Code examples
Getting Started with Node.js on Windows
Getting Started with Node.js on macOS
Getting Started with Node.js on Ubuntu
Getting Started with Node.js on Red Hat Enterprise Linux (RHEL)
Getting Started with Node.js on SUSE Linux Enterprise Server (SLES)
Microsoft ODBC Driver for SQL Server
Article • 06/15/2023
Version: 18.2.2.1
Download ODBC driver
ODBC is the primary native data access API for applications written in C and C++ for
SQL Server. There's an ODBC driver for most data sources. Other languages that can use
ODBC include COBOL, Perl, PHP, and Python. ODBC is widely used in data integration
scenarios.
The ODBC driver comes with tools such as sqlcmd and bcp. The sqlcmd utility lets you
run Transact-SQL statements, system procedures, and SQL scripts. The bcp utility bulk
copies data between an instance of Microsoft SQL Server and a data file in a format you
choose. You can use bcp to import many new rows into SQL Server tables or to export
data out of tables into data files.
Download
Download ODBC driver
Documentation
Features
Connection Resiliency
Custom Keystore Providers
Data Classification
DSN and Connection String Keywords and Attributes
SQL Server Native Client (the features available also apply, without OLEDB, to the
ODBC Driver for SQL Server)
Using Always Encrypted
Using Azure Active Directory
Using Transparent Network IP Resolution
Using XA Transactions
Windows
Asynchronous Execution (Notification Method) Sample
Driver-Aware Connection Pooling
Features and Behavior Changes
Release Notes for ODBC to SQL Server on Windows
System Requirements, Installation, and Driver Files
Community
SQL Server Drivers blog
SQL Server Data Access Forum
Microsoft Drivers for PHP for SQL
Server
Article • 11/18/2022
Download PHP driver
The Microsoft Drivers for PHP for SQL Server enable integration with SQL Server for PHP
applications. The drivers are PHP extensions that allow the reading and writing of SQL
Server data from within PHP scripts. The drivers provide interfaces for accessing data in
Azure SQL Database and in all editions of SQL Server 2005 and later (including Express
Editions). The drivers make use of PHP features, including PHP streams, to read and
write large objects.
Getting Started
Step 1: Configure development environment for PHP development
Step 2: Create a database for PHP development
Step 3: Proof of concept connecting to SQL using PHP
Step 4: Connect resiliently to SQL with PHP
Documentation
Getting Started
Overview
Programming Guide
Security Considerations
Community
Support Resources for the Microsoft Drivers for PHP for SQL Server
Download
Download drivers for PHP for SQL
Samples
Code Samples for the Microsoft Drivers for PHP for SQL Server
Getting Started with PHP on Windows
Getting Started with PHP on macOS
Getting Started with PHP on Ubuntu
Getting Started with PHP on Red Hat Enterprise Linux (RHEL)
Getting Started with PHP on SUSE Linux Enterprise Server (SLES)
Python SQL driver
Article • 11/18/2022
Install SQL driver for Python
You can connect to a SQL Database using Python on Windows, Linux, or macOS.
Getting started
There are several Python SQL drivers available. However, Microsoft places its testing efforts and its confidence in the pyodbc driver. Choose one of the following drivers, and configure your development environment:
Documentation
For documentation, see Python documentation at Python.org .
Community
Azure Python Developer Center
python.org Community
Next steps
Explore samples that use Python to connect to a SQL database in the following articles:
Ruby driver for SQL Server
Download Ruby driver for SQL
You can connect to a SQL Database using Ruby on Windows, Linux, or macOS.
Get started
Step 1: Configure development environment for Ruby development
Step 2: Create a SQL database for Ruby development
Step 3: Proof of concept connecting to SQL using Ruby
Documentation
Documentation at ruby-lang.org
Support
Ruby and tiny_tds are community-supported software. This software doesn't come with
Microsoft support. To get help, visit the community resources.
Community resources
Azure Ruby Developer Center
Samples
Getting Started with Ruby on macOS
Getting Started with Ruby on Ubuntu
Getting Started with Ruby on Red Hat Enterprise Linux (RHEL)
Public data sets for testing and
prototyping
Article • 03/16/2023
Applies to:
Azure SQL Database
Azure SQL Managed Instance
SQL Server
on Azure VM
Browse this list of public data sets for data that you can use to prototype and test
storage and analytics services and solutions.
US Census
About the data: Statistical data about the population of the U.S.
About the files: Data sets are in various data formats.

Earth science data from NASA
About the data: Over 32,000 data collections covering agriculture, atmosphere, biosphere, climate, cryosphere, human dimensions, hydrosphere, land surface, oceans, sun-earth interactions, and more.
About the files: Data sets are in various formats.

Airline flight delays and other transportation data
About the data: "The U.S. Department of Transportation's (DOT) Bureau of Transportation Statistics (BTS) tracks the on-time performance of domestic flights operated by large air carriers. Summary information on the number of on-time, delayed, canceled, and diverted flights appears ... in summary tables posted on this website."
About the files: Files are in CSV format.

Traffic fatalities - US Fatality Analysis Reporting System (FARS)
About the data: "FARS is a nationwide census providing NHTSA, Congress, and the American public yearly data regarding fatal injuries suffered in motor vehicle traffic crashes."
About the files: "Create your own fatality data run online by using the FARS Query System. Or download all FARS data from 1975 to present from the FTP Site."

Toxic chemical data - EPA Toxicity ForeCaster (ToxCast™) data
About the data: "EPA's most updated, publicly available high-throughput toxicity data on thousands of chemicals. This data is generated through the EPA's ToxCast research effort."
About the files: Data sets are available in various formats including spreadsheets, R packages, and MySQL database files.

Toxic chemical data - NIH Tox21 Data Challenge 2014
About the data: "The 2014 Tox21 data challenge is designed to help scientists understand the potential of the chemicals and compounds being tested through the Toxicology in the 21st Century initiative to disrupt biological pathways in ways that may result in toxic effects."
About the files: Data sets are available in SMILES and SDF formats. The data provides "assay activity data and chemical structures on the Tox21 collection of ~10,000 compounds (Tox21 10K)."

Biotechnology and genome data from the NCBI
About the data: Multiple data sets covering genes, genomes, and proteins.
About the files: Data sets are in text, XML, BLAST, and other formats. A BLAST app is available.

New York City taxi data
About the data: "Taxi trip records include fields capturing pick-up and dropoff dates/times, pick-up and dropoff locations, trip distances, itemized fares, rate types, payment types, and driver-reported passenger counts."
About the files: Data sets are in CSV files by month.

Microsoft Research data sets - "Data Science for Research"
About the data: Multiple data sets covering human-computer interaction, audio/video, data mining/information retrieval, geospatial/location, natural language processing, and robotics/computer vision.
About the files: Data sets are in various formats, zipped for download.

Open Science Data Cloud data
About the data: "The Open Science Data Cloud provides the scientific community with resources for storing, sharing, and analyzing terabyte and petabyte-scale scientific datasets."
About the files: Data sets are in various formats.

Global climate data - WorldClim
About the data: "WorldClim is a set of global climate layers (gridded climate data) with a spatial resolution of about 1 km2. These data can be used for mapping and spatial modeling."
About the files: These files contain geospatial data.

Data about human society - The GDELT Project
About the data: "The GDELT Project is the largest, most comprehensive, and highest resolution open database of human society ever created."
About the files: The raw data files are in CSV format.

Advertising click prediction data for machine learning from Criteo
About the data: "The largest ever publicly released ML dataset."
About the files: For more information, see Criteo's 1 TB Click Prediction Dataset.

ClueWeb09 text mining data set from The Lemur Project
About the data: "The ClueWeb09 dataset was created to support research on information retrieval and related human language technologies. It consists of about 1 billion web pages in 10 languages that were collected in January and February 2009."
About the files: See Dataset Information.

GitHub activity data from The GHTorrent project
About the data: "The GHTorrent project [is] an effort to create a scalable, queryable, offline mirror of data offered through the GitHub REST API. GHTorrent monitors the GitHub public event time line. For each event, it retrieves its contents and their dependencies, exhaustively."
About the files: MySQL database dumps are in CSV format.

Stack Overflow data dump
About the data: "This is an anonymized dump of all user-contributed content on the Stack Exchange network [including Stack Overflow]."
About the files: "Each site [such as Stack Overflow] is formatted as a separate archive consisting of XML files zipped via 7-zip using bzip2 compression. Each site archive includes Posts, Users, Votes, Comments, PostHistory, and PostLinks."
What's new in SQL Server on Azure
VMs? (Archive)
Article • 03/15/2023
Applies to:
SQL Server on Azure VM
This article summarizes older documentation changes associated with new features and
improvements in the recent releases of SQL Server on Azure VMs . To learn more
about SQL Server on Azure VMs, see the overview.
2021
Deployment configuration improvements: It's now possible to configure the following options when deploying your SQL Server VM from an Azure Marketplace image: system database location, number of tempdb data files, collation, max degree of parallelism, min and max server memory settings, and optimize for ad hoc workloads. Review Deploy SQL Server VM to learn more.
Automated backup improvements: The possible maximum automated backup retention period has changed from 30 days to 90, and you're now able to choose a specific container within the storage account. Review automated backup to learn more.
Tempdb configuration: You can now modify tempdb settings directly from the SQL virtual machines blade in the Azure portal, such as increasing the size and adding data files.
Eliminate need for HADR Azure Load Balancer or DNN: Deploy your SQL Server VMs to multiple subnets to eliminate the dependency on the Azure Load Balancer or distributed network name (DNN) to route traffic to your high availability / disaster recovery (HADR) solution! See the multi-subnet availability group tutorial, or the prepare SQL Server VM for FCI article, to learn more.
SQL Assessment: It's now possible to assess the health of your SQL Server VM in the Azure portal using SQL Assessment to surface recommendations that improve performance and identify missing best-practices configurations. This feature is currently in preview.
SQL IaaS Agent extension now supports Ubuntu: Support has been added to register your SQL Server VM running on Ubuntu Linux with the SQL Server IaaS Extension for limited functionality.
SQL IaaS Agent extension full mode no longer requires restart: Restarting the SQL Server service is no longer necessary when registering your SQL Server VM with the SQL IaaS Agent extension!
Repair SQL Server IaaS extension in portal: It's now possible to verify the status of your SQL Server IaaS Agent extension directly from the Azure portal, and repair it, if necessary.
Security enhancements in the Azure portal: Once you've enabled Microsoft Defender for SQL, you can view Security Center recommendations in the SQL virtual machines resource in the Azure portal.
HADR content refresh: We've refreshed and enhanced our high availability and disaster recovery (HADR) content! There's now an Overview of the Windows Server Failover Cluster, as well as a consolidated how-to for configuring quorum for SQL Server VMs. Additionally, we've enhanced the cluster best practices with more comprehensive setting recommendations adapted to the cloud.
Migrate high availability to VM: Azure Migrate brings support to lift and shift your entire high availability solution to SQL Server on Azure VMs! Bring your availability group or your failover cluster instance to SQL Server VMs using Azure Migrate today!
Performance best practices refresh: We've rewritten, refreshed, and updated the performance best practices documentation, splitting one article into a series that contains a checklist, VM size guidance, storage guidance, and instructions for collecting a baseline.
2020
Azure Government support: It's now possible to register SQL Server virtual machines with the SQL IaaS Agent extension for virtual machines hosted in the Azure Government cloud.
Azure SQL family: SQL Server on Azure Virtual Machines is now a part of the Azure SQL family of products. Check out our new look! Nothing has changed in the product, but the documentation aims to make the Azure SQL product decision easier.
Distributed network name (DNN): SQL Server 2019 on Windows Server 2016+ is now previewing support for routing traffic to your failover cluster instance (FCI) by using a distributed network name (DNN) rather than using Azure Load Balancer. This support simplifies and streamlines connecting to your high-availability (HA) solution in Azure.
FCI with Azure shared disks: It's now possible to deploy your failover cluster instance (FCI) by using Azure shared disks.
Reorganized FCI docs: The documentation around failover cluster instances with SQL Server on Azure VMs has been rewritten and reorganized for clarity. We've separated some of the configuration content, like the cluster configuration best practices, how to prepare a virtual machine for a SQL Server FCI, and how to configure Azure Load Balancer.
Migrate log to ultra disk: Learn how you can migrate your log file to an ultra disk to leverage high performance and low latency.
Create availability group using Azure PowerShell: It's now possible to simplify the creation of an availability group by using Azure PowerShell as well as the Azure CLI.
Configure availability group in portal: It's now possible to configure your availability group via the Azure portal. This feature is currently in preview and being deployed, so if your desired region is unavailable, check back soon.
Automatic extension registration: You can now enable the automatic registration feature to automatically register all SQL Server VMs already deployed to your subscription with the SQL IaaS Agent extension. This applies to all existing VMs, and will also automatically register all SQL Server VMs added in the future.
DNN for availability group: You can now configure a distributed network name (DNN) listener for SQL Server 2019 CU8 and later to replace the traditional VNN listener, negating the need for an Azure Load Balancer.
2019
Changes Details
Free DR replica in You can host a free passive instance for disaster recovery in Azure for your
Azure on-premises SQL Server instance if you have Software Assurance .
Bulk SQL IaaS Agent extension registration: You can now bulk register SQL Server virtual machines with the SQL IaaS Agent extension.
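Registration can also be scripted one VM at a time. The following is a minimal sketch using the Azure CLI, where the VM name, resource group, and location are placeholders; note that `az sql vm create` registers an existing virtual machine with the extension rather than creating a new VM:

```shell
# Sketch: register one existing SQL Server VM with the SQL IaaS Agent
# extension via the Azure CLI (placeholder names; requires `az login`).
az sql vm create \
  --name MySqlVm \
  --resource-group MyResourceGroup \
  --location eastus \
  --license-type PAYG
```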
Performance-optimized storage configuration: You can now fully customize your storage configuration when creating a new SQL Server VM.
Premium file share for FCI: You can now create a failover cluster instance by using a Premium file share instead of the original method of Storage Spaces Direct.
Azure Dedicated Host: You can run your SQL Server VM on Azure Dedicated Host.
SQL Server VM migration to a different region: Use Azure Site Recovery to migrate your SQL Server VM from one region to another.
New SQL IaaS installation modes: It's now possible to install the SQL Server IaaS extension in lightweight mode to avoid restarting the SQL Server service.
SQL Server edition modification: You can now change the edition property for your SQL Server VM.
Changes to the SQL IaaS Agent extension: You can register your SQL Server VM with the SQL IaaS Agent extension by using the new SQL IaaS modes. This capability includes Windows Server 2008 images.
New SQL Server VM management in the Azure portal: There's now a way to manage your SQL Server VM in the Azure portal. For more information, see Manage SQL Server VMs in the Azure portal.
Extended support for SQL Server 2008 and 2008 R2: Extend support for SQL Server 2008 and SQL Server 2008 R2 by migrating as-is to an Azure VM.
Custom image supportability: You can now install the SQL Server IaaS extension to custom OS and SQL Server images, which offers the limited functionality of flexible licensing. When you're registering your custom image with the SQL IaaS Agent extension, specify the license type as "AHUB." Otherwise, the registration will fail.
Named instance supportability: You can now use the SQL Server IaaS extension with a named instance, if the default instance has been uninstalled properly.
Portal enhancement: The Azure portal experience for deploying a SQL Server VM has been revamped to improve usability. For more information, see the brief quickstart and more thorough how-to guide to deploy a SQL Server VM.
Portal improvement: It's now possible to change the licensing model for a SQL Server VM from pay-as-you-go to bring-your-own-license by using the Azure portal.
Simplification of availability group deployment to a SQL Server VM through the Azure CLI: It's now easier than ever to deploy an availability group to a SQL Server VM in Azure. You can use the Azure CLI to create the Windows failover cluster, internal load balancer, and availability group listeners, all from the command line. For more information, see Use the Azure CLI to configure an Always On availability group for SQL Server on an Azure VM.
2018
Automated setup of an availability group deployment with Azure Quickstart Templates: It's now possible to create the Windows failover cluster, join SQL Server VMs to it, create the listener, and configure the internal load balancer by using two Azure Quickstart Templates. For more information, see Use Azure Quickstart Templates to configure an Always On availability group for SQL Server on an Azure VM.
Automatic registration to the SQL IaaS Agent extension: SQL Server VMs deployed after this month are automatically registered with the new SQL IaaS Agent extension. SQL Server VMs deployed before this month still need to be manually registered. For more information, see Register a SQL Server virtual machine in Azure with the SQL IaaS Agent extension.
Switch licensing model: You can now switch between the pay-per-usage and bring-your-own-license models for your SQL Server VM by using the Azure CLI or PowerShell. For more information, see How to change the licensing model for a SQL Server virtual machine in Azure.
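As an illustration, the switch amounts to a single Azure CLI call; this is a sketch with placeholder VM and resource group names:

```shell
# Sketch: switch an existing SQL Server VM to bring-your-own-license
# (Azure Hybrid Benefit); use --license-type PAYG to switch back.
az sql vm update \
  --name MySqlVm \
  --resource-group MyResourceGroup \
  --license-type AHUB
```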
Contribute to content
To contribute to the Azure SQL documentation, see the Docs contributor guide.
Additional resources
Applies to:
Azure SQL Database
Azure SQL Managed Instance
In this article, learn how to resolve capacity errors when deploying Azure SQL Database
or Azure SQL Managed Instance resources.
Exceeded quota
If you encounter any of the following errors when attempting to deploy your Azure SQL resource, request an increase to your quota:
Server quota limit has been reached for this location. Please select a
Could not perform the operation because server would exceed the allowed
Database Throughput Unit quota of xx.
Could not perform the operation because server would exceed the allowed
Subscription access
Your subscription may not have access to create a server in the selected region if your
subscription has not been registered with the SQL resource provider (RP).
If you see the following errors, please register your subscription with the SQL RP:
Your subscription does not have access to create a server in the selected
region.
For exceptions to this rule please open a support request with issue type of
'Service and subscription limits'
Location 'region name' is not accepting creation of new Windows Azure SQL
Database servers for the subscription 'subscription id' at this time
Enable region
Your subscription may not have access to create a server in the selected region if that
region has not been enabled. To resolve this, file a support request to enable a specific
region for your subscription.
If you see the following errors, file a support ticket to enable a specific region:
Your subscription does not have access to create a server in the selected
region.
For exceptions to this rule please open a support request with issue type of
'Service and subscription limits'
Location 'region name' is not accepting creation of new Windows Azure SQL
Database servers for the subscription 'subscription id' at this time
You can register your subscription using the Azure portal, the Azure CLI, or Azure
PowerShell.
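With the Azure CLI, registration amounts to registering the Microsoft.Sql resource provider on the subscription. A minimal sketch, assuming you're logged in with the correct subscription selected:

```shell
# Sketch: register the subscription with the SQL resource provider.
az provider register --namespace Microsoft.Sql

# Registration is asynchronous; poll until this reports "Registered".
az provider show --namespace Microsoft.Sql --query registrationState
```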
If your subscription is part of an Azure Program offering, and you would like to request
access to any of the following regions, please consider using an alternate region instead:
Australia Central, Australia Central 2, Australia SouthEast, Brazil SouthEast, Canada East,
China East, China North, China North 2, France South, Germany North, Japan West, JIO
India Central, JIO India West, Korea South, Norway West, South Africa West, South India,
Switzerland West, UAE Central, UK West, US DoD Central, US DoD East, US Gov Arizona,
US Gov Texas, West Central US, West India.
Next steps
After you submit your request, it will be reviewed. You will be contacted with an answer
based on the information you provided in the form.
For more information about other Azure limits, see Azure subscription and service limits,
quotas, and constraints.
Understanding the Root CA change for Azure SQL Database & SQL Managed Instance
Article • 02/24/2023
Azure SQL Database & SQL Managed Instance will be changing the root certificate used by SSL-enabled client applications and drivers to establish a secure TDS connection. The current root certificate is set to expire on October 26, 2020 as part of standard maintenance and security best practices. This article gives you more details about the upcoming changes, the resources that will be affected, and the steps needed to ensure that your application maintains connectivity to your database server.
The new certificate will be used starting October 26, 2020. If you use full validation of the server certificate when connecting from a SQL client (TrustServerCertificate=false), you need to ensure that your SQL client can validate the new root certificate before October 26, 2020.
If you are not currently using SSL/TLS, there is no impact to your application availability. You can verify whether your client application is trying to validate the root certificate by looking at the connection string: if TrustServerCertificate is explicitly set to true, you are not affected.
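As an illustration, the connection-string check above can be automated. The helper below is hypothetical (not part of any driver); it assumes a standard semicolon-delimited connection string and treats an absent TrustServerCertificate keyword as the driver default of false, meaning full validation:

```python
def validates_server_cert(connection_string: str) -> bool:
    """Return True if the client validates the server certificate chain
    and is therefore affected by the root CA change."""
    for part in connection_string.split(";"):
        key, _, value = part.partition("=")
        if key.strip().lower() == "trustservercertificate":
            # Explicitly trusting the server certificate skips root CA
            # validation, so such clients are not affected.
            return value.strip().lower() not in ("true", "yes")
    return True  # keyword absent: drivers default to full validation
```

For example, `validates_server_cert("Server=myserver;TrustServerCertificate=true")` returns False, meaning that client skips validation and is unaffected.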
If your client driver utilizes the OS certificate store, as the majority of drivers do, and your OS is regularly maintained, this change will likely not affect you, because the root certificate we are switching to should already be available in your Trusted Root Certification Authorities store. Check for DigiCert Global Root G2 and validate that it is present.
If your client driver utilizes a local file certificate store, then to avoid your application's availability being interrupted by certificates being unexpectedly revoked, or to update a certificate that has been revoked, refer to the What do I need to do to maintain connectivity section.
Download the Baltimore CyberTrust Root and DigiCert Global Root G2 root CA certificates from the links below:
https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
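To check whether the new root is already trusted, you can enumerate the trust store your client resolves certificates from. The sketch below uses Python's ssl module against the default OS/OpenSSL trust store (drivers that rely on the OS store resolve roots the same way); the function name is our own:

```python
import ssl

def trusted_roots_matching(name_fragment: str) -> list:
    """List common names of trusted root CAs containing name_fragment."""
    ctx = ssl.create_default_context()  # loads the default trust store
    matches = []
    for cert in ctx.get_ca_certs():
        # 'subject' is a tuple of RDN tuples, e.g. ((('commonName', 'X'),),)
        subject = dict(rdn[0] for rdn in cert.get("subject", ()))
        if name_fragment.lower() in subject.get("commonName", "").lower():
            matches.append(subject["commonName"])
    return matches

# e.g. trusted_roots_matching("DigiCert Global Root G2")
```

An empty result means the root is not in the store that Python sees and may need to be installed or checked through your OS tooling.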
No. Because the change only affects how the client connects to the server, no maintenance downtime is needed for this change.
ARCHITECTURE: Browse Azure architectures
CONCEPT: Explore cloud best practices

Technology Areas
Explore architectures and guides for different technologies

Analytics
- Analytics architecture design
- Choose an analytical data store in Azure
- Choose a data analytics technology in Azure
- Analytics end-to-end with Azure Synapse
- Automated enterprise BI with Azure Data Factory
- Stream processing with Azure Databricks
- Databricks Monitoring
- Advanced analytics architecture
- Data lakes
- IoT analytics for construction
- Real-time fraud detection
- Mining equipment monitoring
- Predict the length of stay in hospitals

Databases
- Databases architecture design
- Big Data architectures
- Build a scalable system for massive data
- Choose a data store
- Extract, transform, and load (ETL)
- Online analytical processing (OLAP)
- Online transaction processing (OLTP)
- Data warehousing in Microsoft Azure
- Extend on-premises data solutions to the cloud
- Free-form text search
- Time series solutions
See more

VM workloads
- Linux VM deployment
- Windows VM deployment
- N-tier application with Cassandra (Linux)
- N-tier application with SQL Server (Windows)
- Multi-region N-tier application
- Highly scalable WordPress
- Multi-tier Windows

SAP
- Overview
- SAP HANA on Azure (Large Instances)
- SAP HANA Scale-up on Linux
- SAP NetWeaver on Windows on Azure
- SAP S/4HANA in Linux on Azure
- SAP BW/4HANA in Linux on Azure
- SAP NetWeaver on SQL Server
- SAP deployment using an Oracle DB
- Dev/test for SAP

Web apps
- Basic web application
- Baseline zone-redundant web application
- Multi-region deployment
- Web application monitoring
- E-commerce API management
- E-commerce front-end
- E-commerce product search
- Publishing internal APIs externally
- Securely managed web application
- Highly available web application

Build your skills with Microsoft Learn training